
Open Source Bridge 2017: Democratize Virtual Reality


It's almost no secret that I love Open Source Bridge. There are plenty of super awesome developer conferences out there, many of which I have never visited. But even among the ones I was fortunate enough to attend, Open Source Bridge is very special.

This all-volunteer-run conference has a special place in my heart, and this was my third time attending. The first two visits were pretty interesting for me too. The very first time I visited Open Source Bridge was in 2015, to talk about my first contribution to the Firefox OS Keyboard. It was my first talk at a conference in the United States, and my first at a developer conference. Needless to say, I was pretty tense, and excited as well. I still remember that talk, and OSB also made a video of it so that I would not forget -_-

In 2016, I was invited to present a talk about some IoT work which at that time was still important to Mozilla. OSB stopped recording videos from that year, but the talk resulted in an opensource.com article, which you can read here.

Not every day you see a sign like that
This year I was invited to deliver a talk on WebVR and A-Frame titled "A-Frame: Your weapon in the war to democratize virtual reality". The conference ran from the 20th to the 23rd of June, with the 23rd being the unconference day, and the dates clashed with LinuxCon Beijing, which I was attending. The 23rd was the only day I had left, and they actually gave me a slot on it: the unconference day!

I reached Portland the night before and prepared my slides. Fortunately, a flight back from Beijing is a pretty long one and gives you enough time to polish your slides. My talk was scheduled for 10 A.M. the next morning, the only scheduled talk of the day apart from the impromptu unconference-style sessions planned throughout. I reached the venue in the morning and was shown to the room where the talk was to be held. Most attendees generally leave after the main talks, and since this was the unconference day, I was not expecting too many people. Sure enough, when I started the talk there were exactly four people in the room. I thought that was fine, so I began with them, and then it all changed.

The unconference whiteboard
Just when I was about to finish introducing myself and what WebVR is, a bunch more people entered the room. I did a quick recap and started again, and by the time I was explaining why it is important to have an open ecosystem for the virtual reality scene, the room had completely filled up, with people standing at the back! That was a complete surprise for me and a huge confidence booster. The rest of the talk went pretty well and was very interactive. During my Q&A the audience asked if they could have a longer Q&A, and the organizers encouraged me to continue since there was no fixed schedule that day. The 40-minute talk ended with an equally long, super engaging Q&A.
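To give a sense of why A-Frame makes the case for democratizing VR so well, the kind of hello-world I like to show looks roughly like this (a minimal sketch, not the exact demo from the talk; the script URL assumes a recent A-Frame release):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame is a single script include: no build step, no native SDK -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- An entire VR-ready scene expressed in declarative HTML tags -->
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4"
               color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Anyone who can write HTML can open this in a browser and look around in VR, which is exactly the low barrier to entry the talk argued an open VR ecosystem needs.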

But that was just the beginning of the surprises. Once the talk finished and I was packing up, around 12 of the participants came over and asked whether I would be available for an unconference-style discussion on the privacy concerns of VR, along with VR and empathy. This ended up being another 30 minutes of passionate discussion about different aspects of VR, and it filled up two pages of my notepad with scribbles and ideas.

It was a really pleasant surprise that, on the last day of the conference, so many people were still passionate enough about the topic to fill up the whole room and take the initiative to discuss both the technology and its social aspects. My only suggestion for the organizers would be to hold the conference over a weekend instead of in the middle of the week. That would have made it much easier for me to juggle my schedule, and I assume the same was true for many other attendees.

Though it was just a one-day experience for me, I left the conference that day on a high note.


Do drop a note if you were present at the talk and have any suggestions for how I can improve.
