
Immersive Payment: First Update

It has been more than six months since I received the Immersive Payments Spark Grant. Since then I have been working on realizing the scope of the project, and I am finally here to give an update.


The project aims to address the following facets of distributed payment in the WebVR space:

  • Can we enable micropayments for small 3D assets?
  • Can we give content and asset creators a way to make these transactions part of a self-hosted (or managed) marketplace?
The initial concept was simple. We started with the 3D assets and models a creator typically uses to build virtual reality (or even augmented reality) scenes. Instead of requiring these to be bought outright, can a creator benefit from giving the audience more control over how they pay for added third-party assets?

Initial Prototype:

The initial prototype for the project relies on Spoke for creating scenes for Mozilla Hubs. We wanted to see if we could have extended portions of these scenes where some elements are monetized using Web Monetization while others remain in the free domain.
For the initial experiments, we wanted to see whether a whole web page running a simple A-Frame WebVR scene could be monetized at all.
As it turns out, it is relatively simple to enable Web Monetization on a simple WebVR page and host it, and it is also fairly easy to add conditionals to the page. For example:
  <script defer src="https://cdn.coil.com/monetized-classes.js"></script>
  <style id="wm-stylesheet">
    .wm-if-monetized { display: none; }
    .wm-if-not-monetized { display: none; }
  </style>

The script toggles these classes at runtime, so elements are shown or hidden depending on the visitor's Web Monetization state.
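A minimal sketch of the toggling logic behind the two `wm-if-*` classes (a hypothetical helper mirroring what the CDN script does, not the actual script) could look like this:

```javascript
// Hypothetical sketch: given the current Web Monetization state
// ('stopped' | 'pending' | 'started'), decide which class of
// elements should be visible.
function classVisibility(monetizationState) {
  const paying = monetizationState === 'started';
  return {
    'wm-if-monetized': paying,       // visible only while payments stream
    'wm-if-not-monetized': !paying,  // fallback content for everyone else
  };
}
```

The stylesheet hides both classes by default, so even if the script fails to load, visitors simply see neither variant rather than the paid one.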

Or we can hide a whole scene, as you can see in this demo: https://webm-roomscale.glitch.me/


As you can see, the browser first verifies whether a subscription is attached to the visitor's Coil account and enables the WebVR scene based on that. (Caveat: you will need the Coil extension installed and logged in in your browser if you want to test the URL.)
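Under the hood, this kind of check uses the draft Web Monetization JavaScript API: `document.monetization` exposes a state and fires a `monetizationstart` event once payments actually begin streaming. A minimal sketch, where the scene id `paid-scene` is just a placeholder:

```javascript
// Map the Web Monetization state ('stopped' | 'pending' | 'started')
// to whether the paid WebVR scene should be shown. Only 'started'
// means micropayments are actually flowing to the wallet.
function sceneStateFor(monetizationState) {
  return monetizationState === 'started' ? 'enabled' : 'hidden';
}

// Browser-only wiring: reveal the scene once payment actually starts.
// The feature-detection guard lets the helper above run anywhere.
if (typeof document !== 'undefined' && document.monetization) {
  document.monetization.addEventListener('monetizationstart', () => {
    const scene = document.getElementById('paid-scene'); // placeholder id
    if (scene) scene.setAttribute('visible', 'true');    // A-Frame attribute
  });
}
```

Note that `pending` is not enough to unlock content: the provider has been asked to pay but no money has moved yet.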

Our next step was to see how this could be implemented for individual elements of the scene. If you want to know more about adding Web Monetization to specific elements of your website, you can read the excellent guide here.

As a next step, we started using Spoke to create the VR scenes for integration into Mozilla Hubs. The goal here is to have monetization enabled for separate assets inside the scene.

Progress so far:

  • We can now enable selective Web Monetization for specific WebVR assets
  • Separate 3D assets can be monetized
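As a rough illustration of what "selective" means here (a sketch under assumed names, not the project's actual implementation): each asset carries a flag marking it as monetized, and only flagged assets are withheld from non-paying visitors.

```javascript
// Given a list of scene assets, return the ones a visitor may see.
// Free assets are always visible; monetized ones require payment.
function visibleAssets(assets, isPaying) {
  return assets.filter(asset => !asset.monetized || isPaying);
}

// In an A-Frame scene this could be applied to entities tagged with a
// data-monetized attribute (the attribute name is an assumption):
if (typeof document !== 'undefined' && document.monetization) {
  document.monetization.addEventListener('monetizationstart', () => {
    document.querySelectorAll('[data-monetized]').forEach(el => {
      el.setAttribute('visible', 'true'); // reveal the paid asset
    });
  });
}
```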

Ongoing:

  • Build a centralized place where asset creators can highlight alternate story paths associated with exclusive assets
  • Enable alternate story paths for WebVR-related stories (this works by creating alternate paths for the story to end or expand; the present challenge is implementing this so that it doesn't cripple the rest of the scene)
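One way to picture the alternate-path idea (purely illustrative; the node and branch names are made up) is a story graph where exclusive branches are filtered out for non-paying visitors, so the free path always remains intact and the base scene can never break:

```javascript
// Return the branches reachable from a story node. Exclusive branches
// are gated behind monetization; free branches always survive, which
// is what keeps the rest of the scene from being crippled.
function reachableBranches(node, isPaying) {
  return node.branches.filter(branch => !branch.exclusive || isPaying);
}
```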
Planned:

I was fortunate to get an extension of the deadline. With that in mind, we plan to contribute to the underlying protocol itself and put forward a proposal for conditional interactions, based on our recently accepted ICBC paper, "On Conditional Cryptocurrency with Privacy", which can be accessed here.

This concludes this very short initial update on the project. Some personal health issues, along with the present COVID surge in India, my home country where I am right now, made periodic updates to the project quite difficult, a difficulty I plan to address in the next few weeks.

Two follow-up blog posts will detail some of our design choices, our code, and how you can test some of the features right now, if you want. I would love to have your feedback and suggestions on them.
