
Building Blocks of My First Native App





A few months back, when I was at NASSCOM Tech-Unique (yes, the second edition of IT-Niketan, where I had to deliver a speech last time), the event seemed to center mostly on how the Mobile Web is emerging and how we should embrace it.
In the discussion there were several interesting points raised, and people started dragging PhoneGap in. While this was a very interesting topic, it disturbed me for certain reasons.


I raised my worries, but unfortunately neither the speakers nor the panel could come to any conclusions, so I am still at a loss about the real cons of this approach.
Nevertheless, I decided to put my own thoughts down on the matter.


I consider myself well-versed in web development, and I was excited about the features PhoneGap brings to web apps. Going the HTML5 web app route seemed like the sanest option.


Pros
  • You don't have to learn any new languages if you're already a decent web developer
  • It's very quick to prototype
  • Though we didn't end up using it, jQuery Mobile is pretty neato and makes prototyping even faster
  • Lots of library options for pretty much everything you could possibly want
  • It's really cool and fun
  • If you wanted to, you could bypass the app store by hosting the files on a server and use the application cache to keep things speedy. Changing your app is then just changing a web page and its cache manifest file (see the sketch after this list)
  • Managing images for multiple devices is a lot easier with CSS and media queries than it is for an iOS Xcode project plus an Android project with its ldpi, hdpi, xhdpi, and whatever other dpi.
  • Easier to create vector graphics to design spec
  • Hell, it's just easier to get things to be exactly like the design (except if you care about cross-browser compatibility)
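
On the hosting point above, here is a minimal sketch of the update flow with the HTML5 application cache (now deprecated, but current at the time); the manifest filename is a made-up example:

    // Assumes the page opts in with: <html manifest="app.appcache">
    // ("app.appcache" is a hypothetical name). When the manifest file
    // changes on the server, the browser re-downloads the cached assets
    // in the background and fires 'updateready'.
    window.applicationCache.addEventListener('updateready', function () {
      if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
        window.applicationCache.swapCache(); // switch to the freshly downloaded cache
        window.location.reload();            // relaunch the app on the new version
      }
    }, false);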

Cons
  • There are a lot of mobile browsers out there (in terms of how many different crappy ones we have to support, the state of browsers is worse than it's ever been - it used to be just one, but guess how many people are still on Android 2.x and Windows Phone)
  • There are a lot of mobile devices out there with varying hardware, screen sizes, and network speed
  • Some features you're used to using aren't there on all devices (position: fixed, for instance), and since those are likely the crappy devices, using a JavaScript shim (like iScroll) is out of the question if you care about performance
  • There seem to be version issues between the Facebook Connect plugin for PhoneGap (Cordova) and the latest versions of PhoneGap on iOS - to get Facebook Connect and PhoneGap working together, I had to use an older version of PhoneGap
  • Documentation for PhoneGap itself is pretty decent, but it's still new, so not a lot of people have reliable information on current versions (at least that was the case 5-6 months ago)
  • Since I had to use an older version of PhoneGap, I found that some of its API functions would throw JavaScript errors. I had to bypass the sugar they provide and call PhoneGap.exec directly with their com.phonegap.whateverFunctionality service names (see the sketch after this list) - it was ugly, but it worked
  • There are complications with linking out to other apps like Google Maps
  • I found that saving contacts did not work on all versions of iOS
  • jQuery Mobile + Backbone is more of a pain in the ass than you think (see the sketch after this list)
  • Getting neato transitions can be a pain
  • There are fewer facilities in JavaScript for modularizing large-scale applications than in Objective-C or Java (see the sketch after this list)
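
Since a couple of the cons above reference it, here is roughly what calling PhoneGap.exec directly looked like. This is a sketch from memory against the PhoneGap 1.x-era bridge; the service name "com.phonegap.somePlugin" and action "doSomething" are hypothetical placeholders, and the exact names varied by version and platform:

    // The sugared APIs (navigator.contacts, navigator.notification, ...)
    // all bottom out in this bridge call: success callback, error callback,
    // the registered native service, an action on it, and an args array.
    PhoneGap.exec(
      function (result) { console.log('native call succeeded: ' + result); },
      function (error)  { console.log('native call failed: ' + error); },
      'com.phonegap.somePlugin', // hypothetical native service name
      'doSomething',             // hypothetical action on that service
      ['arg1', 'arg2']           // arguments marshalled over to native code
    );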
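
On the jQuery Mobile + Backbone pain: the core conflict is that both libraries want to own URL routing. A common workaround at the time (sketched here against jQuery Mobile 1.x and Backbone's standard APIs) was to switch off jQuery Mobile's navigation before it initializes and let Backbone's router drive page changes:

    // This must run before jquery.mobile.js loads, hence the 'mobileinit' hook.
    $(document).on('mobileinit', function () {
      $.mobile.hashListeningEnabled = false; // jQM stops reacting to hash changes
      $.mobile.linkBindingEnabled = false;   // jQM stops hijacking link clicks
      $.mobile.pushStateEnabled = false;     // avoid pushState/hash conflicts
    });

    // Backbone owns the routes; jQM is only asked to render the transition.
    var AppRouter = Backbone.Router.extend({
      routes: { 'page/:name': 'showPage' },
      showPage: function (name) {
        $.mobile.changePage('#' + name, { transition: 'slide' });
      }
    });

    new AppRouter();
    Backbone.history.start();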
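
And on modularization: at the time, structuring a large JavaScript app usually meant hand-rolling something like the revealing module pattern (or adopting require.js), whereas Objective-C and Java give you classes, visibility, and packages out of the box. A minimal sketch of the pattern:

    // One global namespace object; each module is an IIFE that
    // exposes only its public surface and keeps everything else private.
    var MyApp = MyApp || {};

    MyApp.contacts = (function () {
      var cache = []; // private state, unreachable from outside

      function save(contact) {
        cache.push(contact);
        return contact;
      }

      return {
        save: save,
        count: function () { return cache.length; }
      };
    }());

    MyApp.contacts.save({ name: 'Ada' });
    console.log(MyApp.contacts.count()); // logs 1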

If your app is simple, then I recommend it. I really did enjoy the process, and it was fun seeing my web app running as an installed app. But just know that it is more trouble than it appears to be.
