IOT + AI + Blockchain

in #blockchain · 6 years ago


Let’s see how we can, intelligently if somewhat naively, compose a product that truly requires all three buzzwords.

I’ve been dealing with developing, deploying and maintaining an embedded machine learning system for close to 2 years now. We currently use machine learning to bubble up events from security cameras and feed them to a web-based platform. Embedded is hard, and edge-based machine learning inference is also very challenging.

I want to explore a potential product to solve some of the pain points we’ve encountered and some of the problems we’ve seen in the market for cloud-based machine learning inference.

What are some of the challenges machine learning companies face?

  1. Architecture — Designing a model architecture from scratch generally requires a PhD or equivalent, but starting from scratch makes less and less sense every day.
  2. Training — Two problems here: time and cost. We have a massive GPU server with 8 Tesla P100s for training, and our training times are still quite slow. I can’t imagine having to rent servers from AWS; the cost would be crippling.
  3. Accuracy — Accuracy is a many-headed beast. In our case, one camera’s performance may improve while another goes from acceptable to embarrassing. Walking this razor’s edge is a constant cycle of guess and check.
  4. Model Management — We have trained (including re-training) around 100 models over the past 3 years. Which version is in production? Which version worked better in which situations? Keeping track of this sucks, and requires building some kind of platform just to keep your sanity.
  5. Deployment — Ultimately it would be amazing to roll out a new model in production to a select few devices before committing to the new build. Even with a thorough QA process, nothing will be as good as real cameras from your customers.
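The model-management pain above can be sketched as a tiny in-memory registry. Everything here is hypothetical (the `ModelRecord` and `Registry` names are illustrative, not a real library); the point is only that each retrained model gets a version, its key metrics, and a production flag, so "which version is live?" always has an answer.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    version: int
    map_score: float   # mean average precision on a fixed validation set
    trained_at: str
    in_production: bool = False

class Registry:
    """Minimal sketch of a model registry: register, promote, look up."""

    def __init__(self):
        self.records = []

    def register(self, name: str, map_score: float) -> ModelRecord:
        # Auto-increment the version per model name.
        version = 1 + max(
            (r.version for r in self.records if r.name == name), default=0
        )
        rec = ModelRecord(
            name, version, map_score, datetime.now(timezone.utc).isoformat()
        )
        self.records.append(rec)
        return rec

    def promote(self, name: str, version: int) -> None:
        # Exactly one version of a model is "in production" at a time.
        for r in self.records:
            if r.name == name:
                r.in_production = (r.version == version)

    def production(self, name: str) -> ModelRecord:
        return next(
            r for r in self.records if r.name == name and r.in_production
        )
```

A real system would persist this and attach per-camera metrics, but even this shape answers the "which version, which situation" question.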

What are some unique challenges to edge-based inference?

  1. Network — This is somewhat specific to our application, but sometimes other people’s networks suck, or they are running the service over a 3G hotspot. This presents a lot of problems when diagnosing an issue. For us, this manifests as “I can’t see my camera feed”, whose underlying issue may literally be anywhere in our stack, but many times… it’s just the network.
  2. Bandwidth — You can’t stream cameras to the cloud, get analytics, and take action on those analytics in real time without incurring prohibitive costs. Being bandwidth conscious presents new challenges not often encountered when doing cloud-based projects.
  3. Compute — Sometimes you can require a customer to have a GPU, other times, they want a cheaper solution. Either way, generally it’s never quite as much as you’d prefer. Balancing accuracy, inference time, and general capacity of each edge device (in our case, the number of cameras that can be run per edge node) is a struggle. A lot of time must be spent managing expectations, because those unfamiliar with machine learning are often expecting “Minority Report”, when in reality the state of the art is only at the “Hot Fuzz” level.
  4. Security — Damn is it hard to secure embedded devices that are physically accessible. This is why entire companies like Wind River exist. Want a secure embedded OS with your app on it? $100k to start.
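The compute-balancing act in point 3 comes down to back-of-the-envelope arithmetic: given a measured per-frame inference time and the frame rate each camera actually needs analyzed, how many cameras can one edge node serve? The numbers and the utilization headroom below are illustrative assumptions, not measurements from our system.

```python
def cameras_per_node(inference_ms: float, analyzed_fps: float,
                     utilization: float = 0.8) -> int:
    """Frames/sec the device can process, divided by frames/sec needed
    per camera, derated by a utilization headroom factor."""
    device_fps = 1000.0 / inference_ms
    return int(device_fps * utilization / analyzed_fps)

# e.g. a 50 ms/frame model, analyzing 2 fps per camera:
# cameras_per_node(50.0, 2.0) -> 8
```

Shaving inference time (a smaller model, quantization) raises the camera count directly, which is exactly the accuracy-versus-capacity trade-off described above.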

So what would a holy grail solution look like?

  1. Pull an arbitrary model off github.
  2. Retrain it with my data, or purchase data I need, or upload data I need annotated and have it be annotated immediately.
  3. Measure its performance for my use case.
  4. Save the model along with its mAP and other key metrics.
  5. Automatically optimize itself for inference based on my deployment target.
  6. Deploy it on an embedded device.
  7. Auto-generate a simple api for inference.
  8. Remotely monitor the health of the node and performance of the model.
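Step 7, auto-generating a simple API for inference, is the least exotic piece of the wish list; stdlib Python is enough to sketch the shape. The `predict` callable below is a stand-in (a real deployment would load the optimized model from step 5), and `make_inference_server` is a hypothetical name, not an existing tool.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

def make_inference_server(predict, port=0):
    """Wrap any predict(payload_dict) -> dict callable in a tiny
    JSON-over-HTTP endpoint. port=0 picks a free ephemeral port."""

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            result = predict(json.loads(body))
            out = json.dumps(result).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(out)))
            self.end_headers()
            self.wfile.write(out)

        def log_message(self, *args):
            pass  # keep the sketch quiet

    server = HTTPServer(("127.0.0.1", port), Handler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server
```

On the edge node the generator would also bake in auth and the health-reporting hooks from step 8, but the request/response shape would look much like this.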

I would pay a lot for that system. Even if it was relatively expensive, I can say first hand it’s not as expensive as stumbling through the learnings.

I’ve seen companies like Algorithmia doing something similar; their angle seems to be serverless. The outline above, though, would be specific to IOT and edge compute, and from what I can see, we’re moving more and more towards fog computing, with inference being done in layers on the edge.

IOT — Check. AI — Check… Blockchain?

Here are several ways blockchain could introduce good incentives, as well as serve a real business use case:

  1. Sell your data. Use blockchain to save your data, and generate tokens when you allow others to use your data.
  2. Keep your data immutable, distributed, and private. The edge nodes could sign the transaction and encrypt the data, and only the master keys can decrypt the data (perhaps not even the node can decrypt the data).
  3. Train your own model, and let other people rent it. Charge a usage fee, paid out in tokens based on usage.
  4. Rent out GPU space for training when you’re not using it.
  5. Blockchain enabled federated training.
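Item 2 above, keeping edge data tamper-evident and attributable, can be shown without any blockchain machinery: a hash-chained log where each node signs its entries. This sketch uses a symmetric HMAC key for brevity; a real system would use per-node asymmetric keys plus encryption so only the master keys can read the payloads.

```python
import hashlib
import hmac
import json

def append_event(chain: list, event: dict, node_key: bytes) -> dict:
    """Append an inference event, linking it to the previous entry's hash
    and signing it with the node's key."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {
        "payload": payload,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
        "sig": hmac.new(node_key, payload.encode(), hashlib.sha256).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify(chain: list, node_key: bytes) -> bool:
    """Walk the chain: any edited payload breaks its hash, its signature,
    and every link after it."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev:
            return False
        if hashlib.sha256(e["payload"].encode()).hexdigest() != e["hash"]:
            return False
        expected = hmac.new(node_key, e["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(e["sig"], expected):
            return False
        prev = e["hash"]
    return True
```

Token issuance for data sales (item 1) would then be a matter of metering verified entries, which is where an actual chain and smart contracts would take over from this toy.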

A reasonable case could be made for a utility token here, and the initial product would be an enterprise-grade solution that could be sold on some kind of per-node or per-second-of-inference basis. I like that there is a tiered way to incorporate blockchain, and that the blockchain modules could be used with or without a cryptocurrency.

The end goal of this project would be to democratize AI to the point where any developer could create an affordable IOT project that leverages machine learning, and where the AI + Blockchain portion could be created, trained, and deployed without writing code, allowing the developers to focus on their application of the technology.

Originally posted on Medium
