
Help find a cure for COVID-19: Set up Folding@home


COVID-19 has now been declared a pandemic. It is a global problem affecting all of our day-to-day lives. Almost the only means of combating the virus at our disposal right now are social distancing and good hygiene, at least until a cure is discovered. As I write this post, the worldwide death toll stands at 17,159, with 470,973 people infected. By the time you read this, those numbers will have swelled even further.

So while we wait, and do our bit by maintaining social distance, I am here today to show you how we can all pitch in our compute power to help researchers find a cure for COVID-19.

Why Protein Folding? Why should you care?

How a protein “looks” in 3D is essential for developing new drugs, especially for new viruses. COVID-19, for example, has spike proteins that jut out from its surface. Normally, human cells won’t let the virus inside. But COVID-19’s spike protein also harbors a Trojan Horse that “activates” it in cells carrying a complementary component. Lung cells have an abundance of these factors, which is why they’re so susceptible to invasion.
Protein folding has been a decades-long, fundamental problem in biochemistry and drug discovery. Almost all of our existing drugs work by grabbing onto particular proteins, so identifying a protein’s structure is akin to surveying the enemy landscape and picking the best point of attack at the same time. The problem is that the genetic code alone doesn’t tell us how a protein looks. When it comes to a new virus, without predicted protein structures we’re essentially fighting an Invisible Man.

Bottom line: if a drug is going to “fit” into a protein like a key into a lock to trigger a whole cascade of nasty reactions, then the first step is to figure out the structure of the lock. That's where Folding@home comes in.

Folding@home

Some of us used to contribute our computing power to SETI@home, which dedicated all that compute power to searching for extraterrestrial intelligence across the universe. While that project has just been sunset, we now have Folding@home, which dedicates its computing resources to protein folding in order to find cures for various diseases, including COVID-19.
It simulates the dynamics of COVID-19 proteins to hunt for new therapeutic opportunities.
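“Simulating the dynamics” here means numerically integrating Newton's equations of motion for every atom, over and over, millions of times. The real Folding@home work units run on heavily optimized simulation cores, but the central idea can be sketched in a few lines. Below is a toy velocity-Verlet integrator applied to a single particle on a hypothetical harmonic “bond”; it is only an illustration of the technique and bears no relation to the actual Folding@home code:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, steps):
    """Advance positions/velocities with the velocity-Verlet scheme,
    the same family of integrator real molecular-dynamics engines use."""
    f = force_fn(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half-kick
        pos += dt * vel              # drift
        f = force_fn(pos)            # recompute forces at new positions
        vel += 0.5 * dt * f / mass   # second half-kick
    return pos, vel

# Toy example: a harmonic "bond" pulling one particle back to the origin.
k = 1.0  # hypothetical spring constant
pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]),
                           lambda x: -k * x, mass=1.0, dt=0.01, steps=100)
```

After 100 steps of size 0.01 (i.e. t = 1), the particle sits very close to the analytic solution cos(1) ≈ 0.540, which is why this simple, energy-stable scheme and its refinements underpin production molecular-dynamics codes.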

What do you need?

  • A computer running macOS, Windows, or Linux
  • A stable internet connection

How do you set it up?

Setting up a Folding@home instance on your own PC is very straightforward. Just follow the steps below.
  • Download the installer for your OS from the Folding@home website and run it. For Windows, you won't really need to do anything else.
  • Once the install is done, you will be shown a welcome page and then your configurable options. See the video below for a step-by-step look at what else you can configure.


(If you cannot load the video, here is a short animation of the steps.)
The video has a step-by-step voice-over guiding you through the parameters.

In the end, you should have a dashboard like the one below, happily helping researchers find a cure for COVID-19.
This is the final dashboard showing your contribution parameters.

Optional Parameters:

You don't really need to set a team ID. But if you decide to use the team ID below, we would all be contributing towards the effort as a team, and could see ourselves as a team (and as individuals) on the leaderboard.

The Team details used are:

TeamID: 248733
Name: Covid19India
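If you prefer to set these outside the GUI, the FAHClient reads its options from a config.xml file (on Linux typically /etc/fahclient/config.xml; the exact path and available options can vary by version, so treat this as an illustrative sketch rather than a definitive reference). A minimal configuration joining the team above might look like:

```xml
<config>
  <!-- Identity shown on the leaderboard -->
  <user value="YourName"/>    <!-- hypothetical username; pick your own -->
  <team value="248733"/>      <!-- the Covid19India team ID -->
  <passkey value=""/>         <!-- optional; obtainable from the FAH website -->
  <!-- How much of your machine to dedicate -->
  <power value="medium"/>
</config>
```

Restart the client after editing so it picks up the new settings.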
