Merging Man and Machine

Written By: Sachin Patel

The intersection of man and machine is a recurring theme in science-fiction films, but one may wonder whether these stories are merely tales written to entice curious minds with the great unknown, or actual futuristic foreshadowing. Tesla CEO Elon Musk is a believer in the latter. Over the past few years, Musk has discussed the idea of humans “merging” with machines in the near future as a way to ensure that humans are not rendered obsolete.[1] Unlike the creative minds of Hollywood, whose scientific feats stop at the pen, Musk has been devising ways to make his idea a reality. To do this, Musk plans to use a “neural lace” that integrates AI into the human brain, allowing users to communicate their thoughts to computers at incredibly fast speeds.[2] If this can in fact be achieved, it would carry humans farther along the path of innovation and advancement, to a place where we can compete with AI on a more level playing field.

This potential transformation of the human-machine relationship is exciting; however, as with any new technology, it will be accompanied by its own set of legal hurdles. Looking forward, an emerging issue in this area of technology will likely involve liability. The essential question to consider is: Who will be liable when communication between man and machine becomes more than a “bad thought” and someone is injured because of it? Jeremy Elman, a partner and the head of the IP and Technology practice at DLA Piper Miami, wrote an article for TechCrunch addressing this very question.[3] Asking such a question may seem bizarre because most people still regard the merging of man and machine as science fiction, not reality. Elman, however, is looking ahead at the “what ifs.” Specifically, what happens in the event that AI causes harm to something or someone?[4] This becomes a problem of controlling the actual machine, yet Musk believes his “neural lace” can help resolve this issue. As fictional and Terminator-esque as it seems, the threat is quite real: not of cyborgs, but of humans manipulating machines for illicit gain.

To address this, Elman believes that a standard of rules should be created and adopted for the production, development, and use of AI.[5] This would be an institutional standardization of the “ideal neural network,” with punishment proposed for those who fail to adhere to the standard.[6] Thus, whether for self-driving cars or for a “neural lace” communicating with devices, manufacturers and users would be increasingly cautious when using AI. It would be up to the courts, however, to legitimize such a standard, making it the governing framework to which AI designers and users must adhere. All AI applications, including Musk’s “neural lace” if and when it becomes reality, would have to comply with that standard. Although these rules may establish human control, the problem of human redundancy and the threat of becoming obsolete will remain and persist until we adapt to a world where man and machine are one.


[1] http://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html

[2] http://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html

[3] https://techcrunch.com/2017/01/28/artificial-intelligence-and-the-law/

[4] https://techcrunch.com/2017/01/28/artificial-intelligence-and-the-law/

[5] https://techcrunch.com/2017/01/28/artificial-intelligence-and-the-law/

[6] https://techcrunch.com/2017/01/28/artificial-intelligence-and-the-law/


Sachin Patel is a first-year law student at the University of San Francisco School of Law. He completed his undergraduate education at UC Davis, where he studied Art History and Political Science.
