
All Hands 2016: MozLondon, a recap

#MozLondon: Mozilla All Hands 2016

I recently had the opportunity to take part in Mozilla All Hands 2016 (a.k.a. #MozLondon). All Hands are twice-yearly Mozilla events where all the paid staff from different teams around the globe meet each other, along with a handful of invited volunteers, to discuss future projects and get some work done!
This year it was in London, immediately before Brexit (I actually didn't even know about it before I went there). It was a work week, so essentially the event spanned Monday to Friday. I arrived at LHR on Monday morning, and the awesome Heathrow Express took me to Paddington, just a seven-minute walk away from the Hilton Metropole where I was staying with a bunch of other people. The event started with an evening orientation familiarizing us with the rules and regulations along with the Code of Conduct (which turned out to be really important later on...).
Tuesday started with a plenary, which you can watch if you are logged in. Talking about the plenary, here is our new Dr. Who


And in short, this is what an All Hands looks like



We also had a short video about our achievements, which you can see here in its full glory, or just a snippet


The real fun started the next day. As you could see from the schedule, it was pretty packed. My invitation came from the Mozilla Connected Devices - WebVR team, so we all had our plates full. I started by attending a design thinking session and then dove straight into a Connected Devices hackathon.

But before we get into it, we had a lovely meetup with our Mozilla TechSpeakers team and ended up taking a lunch selfie!


Now, diving back to Connected Devices and WebVR, where people were busy making Steampunk hats and virtual computers...



I, though, was working on something much less... flashy. But I managed to finish mixing JavaScript face detection with WebVR and voilà! You could now interact with WebVR objects floating in the sky with your hands. Interactive Augmented Reality à la carte. I was so happy that I literally bounced a couple of times. If you want a demo, head over here!
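For the curious, the rough idea looks something like the sketch below. This is a minimal illustration, not the exact code from my repo: it assumes tracking.js is doing the detection and that there is an A-Frame scene with a video element with id "cam" and an entity with id "floaty"; all of those names are just for this example.

// Minimal sketch: drive a WebVR (A-Frame) entity from face detection.
// Assumes tracking.js is loaded and the page contains <video id="cam">
// (with explicit width/height attributes, as tracking.js expects)
// plus an A-Frame entity with id="floaty". Names are illustrative only.
var tracker = new tracking.ObjectTracker('face');
tracker.setInitialScale(4);
tracker.setStepSize(2);

tracking.track('#cam', tracker, { camera: true });

tracker.on('track', function (event) {
  if (!event.data.length) { return; }            // nothing detected this frame
  var rect = event.data[0];                      // take the first detection
  var video = document.querySelector('#cam');

  // Normalize the detection centre to the -1..1 range on both axes.
  var nx = ((rect.x + rect.width / 2) / video.width) * 2 - 1;
  var ny = -(((rect.y + rect.height / 2) / video.height) * 2 - 1);

  // Move the object floating in front of the camera to follow the detection.
  document.querySelector('#floaty').object3D.position.set(nx * 2, 1.5 + ny, -3);
});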

Do note, you will need Firefox Nightly installed on an Android device to actually play with it. Theoretically it should work fine in Chrome too, but Chrome no longer allows getUserMedia (the API I am using to access the camera) to work from a non-HTTPS origin. And my test server is... HTTP.
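For reference, the camera access itself is plain getUserMedia; something like the sketch below (the #cam id is again just for illustration). Chrome only exposes it on secure origins such as https:// or localhost, which is exactly why my plain-HTTP test server trips it up.

// Sketch of the camera access. In Chrome, getUserMedia is only available
// on secure origins (https:// or localhost), hence the HTTPS note above.
// facingMode: 'environment' asks for the rear camera on an Android phone.
navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
  .then(function (stream) {
    var video = document.querySelector('#cam');  // illustrative id
    video.srcObject = stream;
    return video.play();
  })
  .catch(function (err) {
    console.error('Camera access failed (insecure origin?):', err);
  });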

Also, if you want to poke around, head over to my GitHub repo (index3 is your friend).

With that in hand, we headed over to our end-of-week Steampunk party!
Oh and before that a Connected Devices selfie...


The Steampunk party was awesome and I got to hang out with some of my very dear friends whom I don't get to meet very often.

Of course you need Captain Rogers to rock a party
Of course we have Rosana!

Nothing would have worked out properly without Havi!

The best team ever!
And of course

Don't mess with us!
And did I forget

That's Thunderbird and Foxylady in our hands, if you are wondering...

And how can it end without our Foxy!
Overall it was a great experience, and I am glad I got some work done. Even though it's very half-baked, it works!

I enjoyed every bit of the week. It is amazing to see how much more you can accomplish just by sitting at the same table with others and working, compared to working remotely and clarifying doubts over IRC/email.

Meet the new Dr.

And nothing really ends until you come out of a TARDIS!
