
MobileDay Mexico and Mixed Reality


I was invited to speak at Mobile Day Mexico last week, on 31st October. Mobile Day is a one-day conference aimed at the development of business mobile applications, consisting of a series of lectures, workshops, and a showcase. Mobile Day 2017 focused on cross-platform mobile development, especially Progressive Web Applications, cognitive services (chatbots, language and image recognition), virtual and augmented reality, and user experience design for mobile applications.
I spoke about Web Mixed Reality at my session. Though it was initially focused on WebVR and AR, titled "Virtual Reality for Humans: Build your world using WebVR and aframe", judging from the audience's reaction and my interactions with them, I tweaked it to cover the full spectrum of Mixed Reality.

The event was held at "Salón Los Candiles Polanco", a venue that also houses a hospital. It was a spacious place, and my only complaint would be the sound isolation. The conference room was quite large and, more importantly, completely full.
Most of the talks were in Spanish, but I followed the slides to understand the concepts. The talks were 45 minutes long including Q&A, which made it the perfect slot for mine. There were three talks related to VR and AR at the conference; the first covered ARKit, and mine was the second. I started my talk with a little history of VR and AR and where things stand today. The first half concentrated on the technologies involved, an overview of them, and an introduction to A-Frame.
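To give a flavor of what that introduction looked like, here is a minimal "hello world" A-Frame scene of the sort the first half walked through (a sketch for illustration; 0.7.0 was the current A-Frame release around the time of the talk):

<html>
  <head>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Primitives are plain HTML tags; position is "x y z" in meters -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D2"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>

A few declarative tags and the browser gives you a WebVR-ready 3D scene, complete with a default camera and controls, which is exactly why A-Frame works so well in front of a live audience.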
The second half was mostly live coding and demos. Deviating from my normal flow, I decided that step-by-step live coding using community components, building each demo on top of the previous one to show something progressively more complex, was the right approach. And it worked out pretty well; the audience seemed quite engaged. I had been concerned whether they would be, since mine was the only non-Spanish talk in the whole conference, but I was pleasantly surprised to see the whole room filled up.
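The step-by-step pattern itself is simple: pull in a community component with a script tag, then drive it from markup. I don't have the exact components from the demos handy, so the environment component below is an illustrative pick rather than necessarily the one from the talk:

<html>
  <head>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
    <!-- A community component from the A-Frame ecosystem (illustrative choice) -->
    <script src="https://unpkg.com/aframe-environment-component/dist/aframe-environment-component.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- One attribute swaps in an entire procedurally generated environment -->
      <a-entity environment="preset: forest"></a-entity>
    </a-scene>
  </body>
</html>

Each live demo then layered one more component or entity on top of the previous scene, so the audience could watch the complexity grow one tag at a time.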
Picture Courtesy: Luis 

If you want to watch the live stream, here is the link to the Facebook video. I will update it if I get a better version from the organizers on YouTube.

The live stream had surprisingly reached 700+ viewers when I checked after the session. I am surprised and humbled. If you are one of those 700 and have any questions, feel free to reach out to me via Twitter, email, or the comment section! I would love to chat.

I also tweaked the way I normally present and tried a new way of doing Q&A. Since the audience were not native English speakers, I figured a written Q&A might make more sense than a vocal one; it also helps with the time crunch. I put a Q&A link at the top of my slides for the whole talk (except the demos), where anyone could submit a question mid-talk for me to answer later. We took live questions too, but as I had guessed, I only had time for three before I started creeping into the next speaker's time slot. The method worked out all right, I guess, since I had four more questions submitted there and a lot more after the talk outside. Let me know if you think this method makes sense; I am planning to use it in my next talk, at the Immersive Technology Conference at the University of Houston.

And if you want to have a look at the presentation, the slides are available here.


The most pressing question was what I think the better use cases for VR/AR are and where companies should invest. The short answer I gave was to focus on users and their needs: build better user experiences, and choose the VR/AR tech stack that allows you to do that.

After the conference there was a short afterparty where I was introduced to a plethora of new Mexican foods and a new beer! Luis and Yuli from the Mozilla Mexico community were also there accompanying me.

After the event we went on to a Facebook Developer Circle meetup (to hear a React session?) and ended up in the state below.
Find me!


Trying to decide what to get on the menu
Overall the experience was awesome for me: not only the conference, but the whole experience of visiting Mexico and its amazingly talented community of developers. It's not every day I get to meet a bunch of Facebook community members who become increasingly interested in ReactVR and WebVR over a plate of tacos!

Handling a bowl of Pozole
The only thing I enjoyed more was the food, especially the street food. It was absolutely amazing! And that is coming from someone who lives in Houston and has access to most Mexican foods (now I know not all tacos are created equal...).

I got to meet a lot of awesome people at Mobile Day and loved the conference (I would have loved it even more if I knew Spanish; time to install Duolingo). All the credit for taking me to different places to meet these wonderful people goes to Yuli and Luis, our Mozilla Mexico community members. They made my whole trip immensely enjoyable.

Prisoners of the Sun: I had wanted to go there since I was a kid

That mostly sums up my experience. As always, feedback on the talk is appreciated and immensely valued. If you watch the video and have comments or suggestions, please do get in touch via the comment section or my Twitter handle.

Till then!
