
IoT, Analytics and Data: A day with Intel Edison, GE and OpenWeb

A few days back I received an invitation to attend an Internet of Things workshop on 25th October, run by Intel and GE Digital, showcasing what the Edison board and the Predix platform could do. I realized it was a pretty good chance to see how JavaScript and the open web could work in tandem with them.
So early on the morning of 25th October I hopped into an Uber and headed to "Work Lodge", a nice little co-working place in the middle of Houston.

When that was over, we started setting up for the workshop. The most interesting thing I noticed was how the organizers planned to distribute WiFi among all 60 participants.

Pretty clever, I must say. A good way to distribute the WiFi load in a workshop, especially one where everyone will probably connect at least 4-5 devices (including the IoT board).

Then we started building our prototypes and projects. We mostly worked with Intel Edison boards along with a combination of sensors; I used temperature, light and humidity sensors. Eventually my project became a time series analysis of that data, one that could effectively log changes and act on any sudden shift in those parameters. I used the Predix platform to get it up and running quickly: some nifty Python scripts I cooked up, running on the Edison board, streamed the data into the Predix cloud, where I did the time series analysis. There wasn't time for much more than that. In the dashboard it essentially showed up as my three sensors alongside the demo "wind turbines".
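
For a rough idea of what those scripts did, here is a minimal sketch, assuming Grove-style analog sensors and a generic JSON-over-HTTPS ingestion endpoint; the pin numbers, URL, token and payload shape are placeholders, not the actual Predix ingest protocol:

```python
# A minimal sketch, not the exact workshop code: poll three analog sensors
# on the Edison with mraa and push readings upstream every few seconds.
# The ingestion URL, token, pins and JSON shape are illustrative placeholders.
import json
import time

import mraa      # Intel's GPIO/AIO library, available on Edison images
import requests

INGEST_URL = "https://timeseries.example.com/v1/datapoints"  # placeholder
TOKEN = "<oauth-token-here>"                                 # placeholder

sensors = {
    "temperature": mraa.Aio(0),  # assumed wiring: analog pin A0
    "light": mraa.Aio(1),        # A1
    "humidity": mraa.Aio(2),     # A2
}

while True:
    ts = int(time.time() * 1000)  # epoch millis, typical for time series stores
    payload = [
        {"name": name, "datapoints": [[ts, sensor.read()]]}
        for name, sensor in sensors.items()
    ]
    requests.post(
        INGEST_URL,
        headers={"Authorization": "Bearer " + TOKEN,
                 "Content-Type": "application/json"},
        data=json.dumps(payload),
        timeout=5,
    )
    time.sleep(10)  # one batch of readings every 10 seconds
```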

The gig itself was a pretty fun thing to do, and I enjoyed it a lot. I also walked a lot of people through how you can connect the board to IBM Watson instead and do plenty of things with that, without being bound to any one platform. We also looked at how to actually sanitize your data before sending it to these platforms for analysis and still get the same kind of predictions (essentially, how to sanitize while keeping the data mapping intact). It was all a lot of fun, and another reminder of how much IoT can help us do.
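
On the sanitization point, here is a minimal sketch of one common approach (illustrative, not the exact technique from the workshop): pseudonymize device identifiers with a keyed hash before upload, and keep the alias-to-device mapping local so the platform's predictions can still be tied back to the real devices:

```python
# Illustrative sketch: pseudonymize device IDs before sending readings to a
# third-party analytics platform, keeping the reverse mapping local so
# results and predictions can be mapped back to the real devices.
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-on-your-side"  # never uploaded anywhere

def pseudonym(device_id: str) -> str:
    # HMAC rather than a bare hash, so outsiders can't brute-force known IDs
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()[:16]

mapping = {}  # alias -> real device ID, stays local

def sanitize(reading: dict) -> dict:
    alias = pseudonym(reading["device_id"])
    mapping[alias] = reading["device_id"]
    return {**reading, "device_id": alias}

# Example: the platform only ever sees the alias
raw = {"device_id": "edison-temp-01", "value": 22.4, "ts": 1509000000000}
print(sanitize(raw))
```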

And at the end, a glimpse of the audience.
