
OpenSource Bridge 2016: Back among the Open Source Citizens


 

I recently gave a talk at OpenSource Bridge in Portland. This was my second time presenting a session here, the first being on Firefox OS (sigh) last year. My session was on 23rd June, the penultimate day of the conference and the same day I finally reached Portland, at 12:50 AM. Since the talk was at 10:45 AM, I spent the rest of the night quickly juggling my slides and pushing last-minute code to GitHub.

Last-minute talks are nothing new to me, being the star procrastinator among all the Mozilla TechSpeakers. But this time I was a little nervous and wanted the talk to go smoothly, since it had generated a little bit of attention thanks to an awesome article from Benjamin (which you can read here).

And the tweets followed




And a few others. So I thought...no pressure...no pressure....and then...I still don't have my slides prepared. (double sigh).
And then, of course, my friends started telling me I was being featured in their LinkedIn stories.

No Pressure, Right?

Then came the day (actually, it already was the day) and I arrived at the venue. As with last year, it was hosted in the Elliot Center, right in the middle of Portland's downtown (or so I thought). I always love the way they organize #OSB, and especially how laid-back and relaxing the hacker lounge is. And did I mention they have a lot of charging ports? A LOT of them?

I quickly grabbed a cup of coffee and went to my assigned room, only to find it completely full, with people standing at the back to listen! No pressure.... The talk, however, went surprisingly smoothly, and the audience really liked it! This time, just for kicks, I had also included a QR code in my slides along with a GitHub link to the code I talked about, and I saw a few people actually scanning it as well as snapping a pic (yay!). And that brings me to my metrics (of course I had that QR code tracked!)


Not bad, I thought, taking into account that they checked out the code from the room itself (and only the QR code was tracked)! I had 5 minutes in mind for questions, which of course spilled over. The session planner didn't mind, since he had a few questions of his own too (more yay!). The conversation continued through my walk to grab some food with some other speakers and, incidentally, some of my audience.
Meanwhile, Benjamin managed to catch me while I was answering a query.


After the food I attended a few more sessions and wrapped up for the day. Of course, before wrapping up, I noticed how we were plotting world domination....



OpenSource Bridge always stands out from other conferences for being a more intimate experience for me. You get to see so many diverse talks on varied topics by so many awesome people. And you can actually talk and engage with them, which is not always the case at big conferences like OpenIoT. I also found it easy to engage with people in a closer group and get feedback about my talk. That always helps me get better, and OpenSource Bridge is an awesome platform for me in that respect.

And how can I end a trip report without reporting the reactions? So from the next day (actually from that very day) on my Twitter...


I can neither confirm nor deny talking about a rooted device :P
Always humbled by such a response.

That, Sir, didn't cross my mind...
This is the best outcome I could ever want from any of my talks: people getting interested and using it to build something!

I ended OpenSource Bridge 2016 on a very good note, actually feeling inspired (and I thought #Mozlondon had me inspired already!)

Obligatory "Distant looking" snap. Courtesy: Benjamin

Most of the pictures used in this blog post were snapped by Benjamin Kerensa. You can check out his Flickr album if you want a feel of how it was at #OSB16.
