
Mozilla Festival 2016: A Recap Through the Journey



MozFest is an annual, hands-on festival for and by the open Internet movement. Every year, bright minds from around the world build, debate, and explore the future of our lives online. In this publication, we invite everyone to share their thoughts and start conversations.

I had the unique opportunity to attend and run three sessions for the 2016 chapter. MozFest is a unique learning experience and a journey in itself. There are so many interesting things and sessions packed into those three days that it becomes almost impossible to follow everything (more on that later). So I will focus my narrative mostly on my own sessions.

Technically I had three different sessions, but one of them was meant to run for all three days, so as soon as we arrived we had to start setting it up.

Session 1: Painting with Virtual Reality and A-Painter

The primary goal of the session was to showcase the capabilities of A-Frame and what A-Painter can do. We had a lot of fun throughout MozFest watching hundreds of digital paintings get made and shared.

The hardware for the session required an HTC Vive and a really beefy machine. Both were shipped from Mozilla's Berlin office. Dietrich and I had quite an adventure on the first day setting them up with a keyboard with a non-US layout and configuring Steam.

Lesson learnt: always figure out the exact versions of the API and Chromium the demo runs smoothly on before updating anything. You may not be able to downgrade and will have to look for hacky workarounds.

Session 2: Are you well? When connected devices know more about you

This talk grew out of, and extended, a talk I did with Dietrich a year earlier at the OpenIoT Summit. The concept became especially relevant once Mozilla's Project Magnet and beacons took over. For Mozfestation (a unique challenge that Dietrich hacked together overnight) we had more than 100 Bluetooth beacons spread throughout the whole building, and participants had to tag all of them. The talk became the perfect test ground, since it essentially deals with how, just by being a passive observer walking around with my phone, I should be able to locate all the beacons purely from their relative positions and signal strengths. More importantly, the opposite holds too: if there were a way to record data from those beacons, I could potentially figure out where a person had walked throughout the building. I see this as a huge privacy leak, and not one you can fix immediately, since it all happens through passive observation. I wished, and still wish, to spark more discussion and brainstorming on how we can prevent this kind of privacy leak. A short account of my talk slides is below. Do leave a comment if you have any questions.
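To give a feel for how little a passive observer needs, here is a minimal sketch of the usual RSSI-to-distance trick using a log-distance path-loss model. The `txPower` (RSSI at 1 m) and path-loss exponent values are illustrative assumptions, not anything from Project Magnet or Mozfestation itself; real beacons advertise their own calibration values.

```javascript
// Sketch: estimating distance to a BLE beacon from its received signal
// strength (RSSI), using the log-distance path-loss model:
//   d = 10 ^ ((txPower - rssi) / (10 * n))   (metres)
// txPower = assumed RSSI at 1 m; n = assumed path-loss exponent.
function estimateDistance(rssi, txPower = -59, n = 2.0) {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// Walking around logging (beaconId, rssi) pairs is already enough to
// roughly place each beacon -- or, inverted, to place the walker.
const readings = [
  { id: "beacon-lobby", rssi: -59 },
  { id: "beacon-stairs", rssi: -79 },
];
for (const r of readings) {
  console.log(r.id, estimateDistance(r.rssi).toFixed(1), "m");
}
```

With several such estimates from different positions, simple trilateration pins each beacon down, which is exactly why passive observation is so hard to defend against.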


Disclaimer: This talk has more technical detail than my talks normally do. So don't hesitate to poke me in the comments if you are curious about anything!

Session 3: Build on WebVR and touch to interact! Come Augmented Reality for the web!

This was a combination of my WebVR talk along with demos of what we can do with a little touch of Augmented Reality, and how we can use our mobile phones to actually interact and touch (using the back camera and our trusty getUserMedia()).
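The back-camera part boils down to one getUserMedia() call with the right constraints. This is a minimal sketch, not the demo's actual code; the helper names are mine, while the constraint shape (`facingMode: "environment"`) is the standard Media Capture API way to prefer the rear camera.

```javascript
// Sketch: constraints that ask for the rear ("environment") camera.
// "ideal" (rather than "exact") falls back gracefully on devices
// with only a front camera, e.g. most laptops.
function backCameraConstraints() {
  return {
    audio: false,
    video: { facingMode: { ideal: "environment" } },
  };
}

// In a browser, pipe the stream into a <video> element, which AR code
// can then sample frame by frame (e.g. via a canvas).
async function startBackCamera(videoEl) {
  const stream = await navigator.mediaDevices.getUserMedia(
    backCameraConstraints()
  );
  videoEl.srcObject = stream;
  await videoEl.play();
  return stream;
}
```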

The session, however, ended up somewhat in disarray after a timing/rescheduling mishap and some miscommunication. In the end I met informally with the people who had wanted to attend and showed them the demos.


Overall, just like every time, MozFest was super fun, super exciting, super exhausting, and filled with memories I cherish.

Not to mention I got to meet friends whom I rarely get a chance to see during the year, notably Flaki, Ioana, Michael, Ellio, Priyanka, and Soumya (who incidentally gave a talk about "Machine Learning for the Masses" — no idea how he pulled that one off).



Later, at my other talk at the Met Office during GraphicalWeb, some of the other speakers brought a concern to me: MozFest has slowly become too condensed, with too many sessions and too many things going on to concentrate on anything. I don't necessarily feel that is a bad thing, but it is one piece of feedback I received.



I will be interested to see how all of you feel about it, especially my second session.



Happy Hacking! 



And if you are near New York and interested in IoT, privacy, and security, you might drop by Princeton, where I will (might?) be present at the session on the adversarial design of IoT used at home.



(That "might" stems from the fact that they have been changing the dates like crazy, which may affect whether I can attend.)
