Analyses, Designs, Design Decisions – and a Conference in Berkeley

We are in the middle of working on our POC, analyzing and validating some core aspects of the technological and economic model. We have now set up a GitHub repository, which we will open up after the POC, first to a select number of people, and soon afterwards turn into an open-source project. 

App Development

The app development portion of the POC has moved beyond the analysis stage. The basic UI screens are out and they look awesome! See below. I like the modern and a bit “in your face” message they send. The design also reads as a soothing merger of a greener palette with a traditional blue. Great job, Pooja. 

Dominic will now integrate it into the basic architecture we have agreed on, using IPFS for the genome files, IPDB for phenotype data, and Ethereum for coin issuance and the wallet. We will not go crazy about the data sizes of genome files. Our very first take will use variations of a small genome file, in the widely used .fa format: just 50k, reflecting the genome of a virus. 
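For reference, FASTA (.fa) is a simple text format, so a minimal reader takes only a few lines. A sketch in Python, with a hypothetical two-line record standing in for the 50k virus genome:

```python
# Minimal sketch: parse a .fa (FASTA) file into named sequences.
# FASTA is standard: a ">" header line followed by sequence lines.
# The record name and sequence below are made-up placeholders.

def parse_fasta(text):
    """Return a dict mapping record ID -> concatenated sequence."""
    records = {}
    name = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            name = line[1:].split()[0]  # record ID is the first token after ">"
            records[name] = []
        elif name is not None:
            records[name].append(line)
    return {k: "".join(v) for k, v in records.items()}

sample = """>virus_demo stand-in for our 50k test genome
ATGCGTACGT
TTAGCCGGAA
"""
genome = parse_fasta(sample)
print(genome["virus_demo"])  # ATGCGTACGTTTAGCCGGAA
```

Because the format is this simple, the same reader works unchanged whether the file holds a 50k viral genome or, later, a multi-gigabyte human one.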

All basic processes, techniques, and technology components can be tested and validated with such small files; larger files with human genomes can follow at the latest during the prototype. In principle, genome data looks very much alike across biological species. 

Another area we have continued to dig into is the Heal-ID biometric ID. Our first step is to “just” use facial recognition to determine a unique ID for a person. There are lots of facial recognition tools out there; ZoomLogin is one of them, and OpenCV is an open-source alternative. At this point we are analyzing multiple options to prove our concept, although it already looks like our Heal-ID approach challenges conventions. 
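To illustrate the general idea (a toy sketch, not our actual pipeline): assume a facial-recognition library such as OpenCV has already produced a numeric face embedding. Coarse quantization absorbs small measurement noise, and hashing the result yields a stable, non-reversible ID. All names, vectors, and parameters below are illustrative assumptions.

```python
# Toy sketch of a biometric-ID derivation, NOT the actual Heal-ID pipeline.
# Input: a face-embedding vector (list of floats), assumed to come from a
# facial-recognition library. Quantize to tolerate small per-scan noise,
# then hash so the ID cannot be reversed back into the biometric data.

import hashlib

def heal_id(embedding, step=0.1):
    """Derive a hex ID from a face-embedding vector."""
    quantized = tuple(round(x / step) for x in embedding)  # absorb small noise
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

a = heal_id([0.42, -1.07, 3.14])
b = heal_id([0.41, -1.08, 3.13])  # slightly noisier scan of the same face
print(a == b)  # True: both scans map to the same ID
```

A real system would need a noise-tolerant scheme such as a fuzzy extractor rather than naive quantization, but the sketch shows the shape of the problem: same person, same ID, no recoverable biometric.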

The Real Costs of Gene Sequencing 

We did some analysis of gene sequencing costs, which will determine the viability of our approach to genome data acquisition. One example is Illumina. At the beginning of the year, prices for their highest-end machines were around $850k to $985k, and such a machine can sequence about 18,000 genomes a year. Leaving reagents aside, the price of $1,000 per genome seems quite inflated: even with an additional 20% for maintenance, the machine cost per genome of a fully used machine would be barely above $60 after just one full year of use. Use it for three years, and that cost drops to around $20. 
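The arithmetic above can be checked in a few lines, using the machine price and throughput figures from the text; full utilization and a one-time 20% maintenance add-on are simplifying assumptions:

```python
# Back-of-the-envelope check of the per-genome machine cost discussed above.
# Figures from the text: $985k high-end machine, ~18,000 genomes/year.
# Assumptions: full utilization, a one-time +20% maintenance add-on.

machine_price = 985_000
total_cost = machine_price * 1.20          # +20% for maintenance
genomes_per_year = 18_000

one_year = total_cost / genomes_per_year           # spread over 1 year of use
three_years = total_cost / (3 * genomes_per_year)  # spread over 3 years of use

print(round(one_year, 2))     # ≈ 65.67, i.e. "barely above $60"
print(round(three_years, 2))  # ≈ 21.89, i.e. around $20
```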

This is in line with a 2014 article in Nature that broke down the costs of a (then) $1,000 genome sequencing effort. The overwhelming portion of the cost was reagents ($797 per genome), followed by depreciation of the machine itself ($137 in 2014, roughly in line with the ~$60 figure above for 2016) and labor costs of about $60, not including electricity. 

A rendering of color-coded identification of nucleotides during gene sequencing

Machines from Oxford Nanopore do not need reagents. Let me be a bit optimistic and say that $100 sequencing is definitely on the near-term horizon. Note, though, that at least some of their machines, like the small, hand-held MinION, require external computing power. 

Now I will be very optimistic and say that with (quite) a bit more creativity in the process, we may be able to push the price down into the region of $10 within the next three to four years. That would be “nothing” compared to the cost of almost any kind of medical procedure in any country.

Key takeaways from the “Crypto-Economic Security Conference” (CESC) at UC Berkeley

CESC was by far the most detailed and technical conference I have attended, and had the largest number of presentations. Although by now I can take big gulps even of detailed technical and economic material, it still felt like drinking from a firehose. I particularly enjoyed the detailed discussions of Nash equilibria, SNARK composition, zero-knowledge proofs, liveness, the Verifier’s Dilemma, extensive game theory and economic modeling, sharding, state channels, gossip, and of course the ever-present Byzantine fault tolerance and its primitives…:-)

For encryption and privacy protection – and for scalability – we will need to dig into ZeroCash in more detail. Alessandro Chiesa from UC Berkeley gave a very impressive presentation. A second presentation in this area also looked very promising for addressing scalability challenges, and even outlined how the technology could be used to manage distributed computing requirements. 

The third technology is one we have already been exploring for a while: Filecoin. Ryan Zurrer from Polychain Capital talked forcefully about the need to motivate what he calls the “keepers” (of the infrastructure). This point keeps coming up, and it definitely hit home. Similarly meaningful was his proposition that Filecoin may essentially offer storage “for free,” meaning at such a low price that storage costs become negligible for most practical purposes. 

Above all, though, I am convinced that NOW the White Paper can be completed, and NOW I am getting a really good feeling for whom to approach and how new technology pieces fit into the picture. CESC also, somewhat surprisingly, alleviated my concern about blockchain possibly being unsuitable for our requirements of anonymity and very high performance. New consensus mechanisms and algorithms seem to be getting these aspects under control. 

And the next conference, in Santa Monica, is just around the corner. My co-founder will join me for that one…

By | 2017-11-20T02:19:43+00:00 October 7th, 2017|Regular Updates|0 Comments