
A day at GraphicalWeb 2017 and Virtual Reality


From November 1st to 4th I was in the beautiful city of Exeter, fresh off my three-day rush at MozFest, to give a keynote talk (my first!) at GraphicalWeb.
GraphicalWeb was unlike most conferences I attend. For starters, it was hosted by the famous Informatics Lab at the Met Office. The Met Office (officially the Meteorological Office until 2000) is the United Kingdom's national weather service, and the Informatics Lab is part of it. The Informatics Lab is a recently formed interdisciplinary group that creates prototypes to expand and improve the use of environmental data.

That's me, trying to fix my slides on the way

On my first day at the conference, I spent 30 minutes at the reception desk getting my passport and visa scanned for security clearance (we had also had to mail them beforehand to get our ID cards made). The conference room itself was awesome. I had never before seen so many scientists, not just developers, so excited about SVG, virtual reality, rendering problems, and what we can do with them. It felt instantly good to talk about solving the problem rather than troubleshooting or just hacking to make it work (which is what normally constitutes my day with my hacky experiments).

This was the first conference where I noticed Mozilla as a sponsor

Another interesting tidbit: the hotel I was recommended, and scheduled to stay at, burnt down just the week before I arrived. I got a different one, but I took a little stroll to see what had happened to my unfortunate *would-have-been* residence.



I also got to meet Techno Rhino

I had my talk on the last day, and as usual, I decided I would just fine-tune my talk a little the night before. Which, I must tell you, is an awesome idea on hotel wifi when you are trying to test a multi-user collaborative experience in virtual reality. Works like a charm…

After that unfortunate testing session, it became very apparent that if I slept, I was not going to make it to my own keynote at 9 A.M. And it's not that fun when you can't make it to your own keynote. So eventually, quite a few coffee cups later, I was finally up on stage.

Ross Middleham from the Met Office cheerfully introduced me to the stage, while I was frantically searching for my power cord and starting up a Python server (I know… somehow my Node.js build system was having trouble with watch and serve) to present my slides.

The talk went pretty smoothly, and I could feel the energy of the room once we started going into the different possibilities and what we can do with WebVR right now. Everyone from the Informatics Lab was pretty excited about the possibilities too.


But what got people super excited was what we could do when we mix and match different web capabilities with A-Frame. I had cobbled together a quick demo where everyone in the audience with a mobile phone could visit a webpage from my slide, and I could show their avatars doing spooky things based on their phones' orientation. Thanks to the good people at the Met Office, there were quite a few Google Cardboards on hand to try the live demo. That one got people really excited.
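The orientation part of that demo boils down to a small mapping. Here is a minimal sketch, with hypothetical function names (the demo's actual networking is not shown), of turning a browser `deviceorientation` reading into an A-Frame rotation string:

```javascript
// Hypothetical sketch: map a phone's deviceorientation reading
// (alpha/beta/gamma, in degrees) to an A-Frame "x y z" rotation string.
// This simple mapping treats beta as pitch, alpha as yaw, and
// negated gamma as roll.
function orientationToRotation({ alpha, beta, gamma }) {
  const pitch = beta;
  const yaw = alpha;
  const roll = -gamma;
  return `${pitch} ${yaw} ${roll}`;
}

// In the browser, each phone would broadcast its reading (e.g. over a
// WebSocket), and every client would apply it to that phone's avatar:
//   avatarEl.setAttribute('rotation', orientationToRotation(reading));
```

The real demo involves a shared server and one avatar entity per connected phone, but the per-frame update is essentially this one function.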
People were also intrigued when we touched on how we can actually visualize data inside VR: creating charts and exploring them.
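The core idea behind charts in VR is just mapping data values to entity geometry. A sketch, with made-up names and default dimensions of my choosing, of laying out a 3D bar chart whose bars would each become an A-Frame `<a-box>`:

```javascript
// Hypothetical sketch: compute position and size for one box per value.
// A-Frame boxes are positioned at their center, so y is half the height.
function barChartLayout(values, { spacing = 0.5, maxHeight = 2 } = {}) {
  const max = Math.max(...values);
  return values.map((v, i) => {
    const height = (v / max) * maxHeight;
    return { x: i * spacing, height, y: height / 2 };
  });
}

// In the browser, each entry would then be turned into an entity:
//   const box = document.createElement('a-box');
//   box.setAttribute('position', `${bar.x} ${bar.y} 0`);
//   box.setAttribute('height', bar.height);
//   sceneEl.appendChild(box);
```

Once the bars are ordinary entities, everything else A-Frame gives you (gaze cursors, controllers, animations) works on them for free, which is what makes exploring a chart in VR so immediate.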


And of course, a talk about A-Frame cannot finish without our trusty A-Painter and A-Blast.

If you are interested, you can watch the talk here, courtesy of the Informatics Lab again!



Overall, the response I received was incredibly positive. Apart from the usual questions and queries I get after a talk, I also received quite a few requests and proposals for collaboration from different university design teams, all of which I dutifully noted down so that I can introduce them to our wonderful Mozilla VR team.

And that is me, happily posing after my session and on my way back home.
