
DeveloperWeek 2017 and a trip among Bay Area developers


I recently had a chance to present at the DeveloperWeek conference in San Francisco. According to the organizers, it is the largest developer conference, with 8,000 attendees, so I was pretty excited to represent Mozilla there and talk about WebVR.

The conference took place at Pier 27. I had never been to a pier before, so it was a little odd to find a full-blown conference venue on one. My talk was on the second day of the conference, in the 9 A.M. slot (I really hate morning slots). That gave me a chance to wander the conference on the first day, look at other talks, roam around the expo, and talk to different people to get a sense of the audience. I couldn't quite verify the 8,000-developer claim, but the conference was definitely big, sprawled across two floors. The schedule was divided into topic categories, and my talk was part of the VR & AR track, though physically that division didn't seem to matter much. The talks were held on stages scattered around one very large open space, with company booths in between.


Though this setup allowed a lot of people to notice the stages, it also meant there was enough ambient noise to interrupt a talk that leaned heavily on audio-visual material. I noted this and scouted out where I was supposed to deliver my talk, Pavilion 3, which turned out to be a proper room in itself, and I felt a little relieved that I would not be interrupted by ambient noise. Which, as it turned out, was not really true.
On the day of the talk, I deliberately started early, at 7:15, to reach the venue well ahead of time (for a change). I was staying across the bridge, and figured that even accounting for traffic on the Bay Bridge I should be able to get there by 8, since Lyft and Google Maps both showed it as a 20-minute trip. How wrong I was. After some frantic texts with the space wrangler, who kept assuring me they would hold the space for me if I was late, I barely managed to reach the venue at 8:59 A.M. So much for being early. While getting out of my Lyft and dashing for the venue, I heard a familiar voice calling my name. It was Michael! He had come all the way from the Mozilla Mountain View office to attend my talk! A minute or two later, while I was plugging in my laptop, I also saw Sandra from Mozilla SFO enter the pavilion. I was so happy, and that worked as a confidence booster for me.
Just after the talk
I started the talk mostly on time and it went great, including most of the demos (I need to come up with a better way to show that Spotify demo). Sandra was kind enough to snap a few pictures of me.


Later that day I was catching up with people who had attended my talk, and I realized the prevalent concerns among the VR crowd were:
  • Lack of a way to traverse from one VR scene to another (Resource)
  • A way to find and discover reliable, reusable components for A-Frame (A-Frame Registry)
  • Concerns about cross-platform support (Resource)
  • Questions about browser compatibility and how to stay up to date (WebVR Rocks)

I have hyperlinked each question above with the resource that captures the answer I generally gave. For the first concern, a small sketch of scene-to-scene linking in A-Frame is included below.
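To make that first point a bit more concrete, here is a minimal sketch of how navigating from one WebVR scene to another can look in A-Frame using its built-in link primitive (available from around A-Frame 0.6.0). The file name, title, and version URL here are placeholders for illustration, not taken from my actual demos:

  <html>
    <head>
      <!-- Load A-Frame; version pinned here only as an example -->
      <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
    </head>
    <body>
      <a-scene>
        <!-- a-link renders a portal that navigates to another WebVR page
             (second-scene.html is a hypothetical target) -->
        <a-link href="second-scene.html"
                title="Go to the next scene"
                position="0 1.6 -3"></a-link>
        <a-sky color="#ECECEC"></a-sky>
      </a-scene>
    </body>
  </html>

The idea is that the portal stays inside VR, so the user can hop between scenes without taking the headset off.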

One recurring comment was that since most of my demos were geared towards mobile with Google Cardboard, I could only talk about the interactive demos rather than actually show them. A suggestion I plan to address in at least some of my future talks.
And that's how I wrapped up DeveloperWeek 2017. Of course, I hung around the pier a few more hours to get a shot of how the city looks from there.





And how could my trip be complete without visiting the Mozilla SFO office, meeting Diego, trying out some Mozilla VR gear, and having a sneak peek at A Saturday Night in WIP mode? It will be released very soon, and you can have a blast with it then!
Me with Diego on top of Mozilla SFO office
