Linux Foundation Open Networking Summit

The Open Networking Summit took place on April 3-6 in Santa Clara, California, where enterprise, cloud, and service providers gathered to share insights, highlight innovation, and discuss the future of open source networking. I was invited to give a talk there about Web Virtual Reality and A-Frame.
The Open Networking Summit (ONS) actually consists of two events (there might be more, but I was involved with two): ONS itself, the big event, and the Symposium on SDN Research (SOSR), an academic conference that accepts peer-reviewed papers.
There were some pretty fantastic papers at the conference. My favorite described a system called “NEAt: Network Error Auto-Correct”. The idea is that the system keeps track of what is going on in your network, notices problems, and automatically corrects them. It was designed for an SDN setup, where a controller responds to changes in the network and tells the switches what to do.
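That controller-driven loop is easy to picture in code. Below is a minimal sketch of the observe-check-repair cycle; it is my own illustration, not NEAt's actual implementation, and every name in it (fetchTopology, findViolations, applyFix) is a hypothetical placeholder.

```typescript
// A minimal sketch of an SDN auto-repair loop in the spirit of NEAt.
// Every name below is a hypothetical placeholder, not the paper's API.

interface Link { src: string; dst: string; up: boolean }
interface Violation { rule: string; link: Link }

// Stand-in for the controller's global view of the network.
function fetchTopology(): Link[] {
  return [{ src: "s1", dst: "s2", up: false }];
}

// A trivial policy for illustration: every link is expected to be up.
function findViolations(links: Link[]): Violation[] {
  return links.filter((l) => !l.up).map((link) => ({ rule: "link-up", link }));
}

// In a real controller this would push corrective flow rules.
function applyFix(v: Violation): void {
  console.log(`repairing ${v.rule} on ${v.link.src} -> ${v.link.dst}`);
  v.link.up = true;
}

// Observe -> check -> repair, the loop a controller can run on its own.
function repairTick(): void {
  const topology = fetchTopology();
  findViolations(topology).forEach(applyFix);
}

setInterval(repairTick, 5_000);
```

The point of the sketch is simply that an SDN controller has a global view of the network, so the policy check and the corrective action can run in one automated loop instead of waiting for an operator to notice.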

The event was held at the Santa Clara Convention Center and was pretty packed.
Keynotes were held in a very big auditorium that took up the whole first floor, while the individual talks were assigned to rooms across the two floors.
Poster sessions were held on the second floor, near another hall where the talks accompanying the posters took place.
The talks were not recorded. Roughly 35 people attended mine, which felt like a pretty perfect audience size without being overwhelming.
A previous version of my talk is available here. It would be great to have some feedback on it, though the content has changed quite a bit since then.

I received quite a lot of interest in the talk, and plenty of questions afterwards. The questions mostly involved authoring tools for WebVR and how we can create scenes that interact with industrial hardware, which nudged me to work on some pet projects I will write about later.
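To give that second question some shape, here is one way such a bridge could look. This is my own sketch, not something from the talk: an A-Frame component that maps a hardware telemetry reading arriving over a WebSocket onto an entity's color. The endpoint URL and the message shape are made up for illustration.

```typescript
// Wiring live hardware telemetry into a WebVR scene.
// Assumes A-Frame is loaded on the page via its <script> tag;
// the WebSocket URL and message shape are hypothetical.
declare const AFRAME: any; // global provided by the A-Frame script

AFRAME.registerComponent("telemetry-color", {
  // Hypothetical telemetry endpoint; replace with your gateway's URL.
  schema: { url: { default: "ws://localhost:8080/telemetry" } },
  init(this: any) {
    const socket = new WebSocket(this.data.url);
    socket.onmessage = (event: MessageEvent) => {
      // Assumed message shape: { "temperature": 42 }
      const reading = JSON.parse(event.data);
      // Map the reading onto the entity: the hotter the machine,
      // the redder the box (hue 120 = green, hue 0 = red).
      const hue = Math.max(0, 120 - reading.temperature);
      this.el.setAttribute("material", "color", `hsl(${hue}, 80%, 50%)`);
    };
  },
});
```

In markup you would then attach it to any entity, e.g. `<a-box telemetry-color="url: ws://plc.local/telemetry"></a-box>`, and the box recolors as readings arrive.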

What do you think about how networking and industry can merge with WebVR and VR in general? Let me know in the comments or on Twitter. I will soon post my own take on it, with a few live examples and demos.
