Monday, December 21, 2020

The Version 0.1 Problem

The Version 0.1 Problem arises when someone predicting that a radical new technology will exist in the future forgets that even a very simple version of that technology (a version 0.1) will have consequences so large and disruptive that it becomes very hard to predict which path technology will take after the version 0.1.

An example. Suppose that you are living in the 1980s and dream of a fully immersive virtual reality cyberspace. Your vision of the future looks like the 1980s plus VR cyberspace. The problem, of course, is that version 0.1 of this technology is just people looking at simple screen interfaces with text or wireframe layouts, i.e. the internet. And the internet alone is so radically new that it is hard for a 1980s person to predict what the next version after the internet-of-screens will be, if there will be one.

Another example: androids. The necessary prerequisite technologies for androids are near-human AI and superior mechanical control technology. A step on the way to a control and motor system agile enough to move like a human is a robot whose control system is not quite so agile, but which, thanks to configurable strength, extra degrees of freedom of movement, extra sensors, and so on, is still much more valuable as a manual labourer than a person. This is not science fiction, but the reality of industrial robots. Robots that actually look like humans account for a small percentage of the value of the robot market. A prediction that follows is that as robot technology improves, we will see a lot of experimentation with form, and that in most cases the form will not converge to a human one. The same argument applies even more strongly to the "brain" of the android. Once again, this is not science fiction but the reality of the artificial intelligence industry. AIs that are supposed to mimic human thinking are a small part of the market, and I think this will remain true in the future as well. Most of the time, users just want a specific problem solved (good search recommendations, efficient fraud detection, well-optimized production schedules, etc.), and the AI having a personality is not conducive to that end.

A third example: generation ships. Long before anyone ships humans off to a different star system at some immense expense, someone will have sent robots to do the same mission at a considerably less immense expense. This is already true for Mars. Perhaps humans will build bases on the Moon and Mars before remote-controlled robots can do it, but I don't think that is likely for any destination after those. It is sobering to think that whatever planets humans ever set foot on in the future, they will have been preceded by robots that built a relatively cushy life support system for them.

A fourth example: brain simulation. Simulating a human brain neuron-for-neuron seems extremely wasteful. Since 0.1% of a human brain's neurons is roughly a whole mouse brain's worth, a computer capable of simulating even that fraction can most likely be put to far more valuable economic uses than simulating a mouse brain.
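As a rough back-of-envelope check, here is a minimal sketch of that comparison, assuming the commonly cited estimates of roughly 86 billion neurons for a human brain and roughly 71 million for a mouse brain (exact figures vary by source):

    # Back-of-envelope comparison; neuron counts are approximate estimates.
    human_neurons = 86e9   # ~86 billion neurons, a commonly cited estimate
    mouse_neurons = 71e6   # ~71 million neurons, a commonly cited estimate

    partial_sim = 0.001 * human_neurons  # 0.1% of a human brain
    print(f"0.1% of a human brain: {partial_sim:.2e} neurons")   # ~8.6e7
    print(f"A whole mouse brain:   {mouse_neurons:.2e} neurons") # ~7.1e7
    # The two counts are within about 20% of each other, which is why
    # simulating 0.1% of a human brain is comparable to simulating a mouse.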

Here is my explanation of why the Version 0.1 Problem happens. Consider a 2x2 matrix, with hardness-of-imagination on one axis and hardness-of-implementation on the other.

What do technologies that are easy to imagine but hard to implement have in common? Typically, they are just a copy or intensification of something that already exists in our surroundings. The idea of the android is obviously not very hard to think of: just "a machine that looks and behaves like a human". As a matter of fact, the motorized car, boat, airplane, and submarine were all in the easy-to-imagine/hard-to-implement category since at least the 1200s, when Roger Bacon speculated about them.

A summary could be that the Version 0.1 Problem exists because it is harder to think of good businesses than to think of popular science fiction.
