Elon Musk, at his Neuralink presentation, claimed that his company's brain-computer interfaces have allowed rats and monkeys to control a computer directly with their minds. The technology implants thousands of tiny electrical probes, each thinner than a human hair, into the brain, letting neural activity drive a computer. The implants are ultimately intended for humans.
Human-computer advances mark a significant step for humanity. Such implants show promise for correcting spinal cord injuries and congenital defects and provide hope that humans will be able to harness the full power of artificial intelligence. Humans and computers are on the cusp of integration.
The tech elite, Musk and his cohort, are determined to redefine humanity and boldly step away from tradition. Their ship sails on the winds of unlimited human potential and notions of physical wholeness for all. Who could question such a bold vision?
We are at the dawn of human augmentation, and the line between humanity and technology is growing ever thinner. Do we still have a compelling answer when we ask, "What does it mean to be human?"
The industrial revolution dramatically altered the meaning of work, while the atomic bomb revolutionised the meaning of science and humanity. AI and human augmentation will revolutionise the meaning of being human even more fundamentally.
To the systems we interact with on a daily basis—Apple’s Siri, Facebook, Instagram, Google and more—humans are nothing more than data points. Data increasingly defines the human being. Yet, instinctively, we all know ourselves to be more than data.
Positive AI theorists believe that the incoming technology heralds a new age of creativity and flourishing for humanity. They contend that if we have more time we will find more interesting things to do; more people will become artists, creators, poets and storytellers.
Reality tells a more sobering tale. Netflix's Reed Hastings stated in 2017 that the company's main competitor is sleep. Its recommendation algorithms are designed to hook consumers on entertainment, and they are working.
There is every chance that in the future humans will not do and create more; they will simply consume more entertainment.
When entertainment and consumption become the purpose of humanity, humans become a cocoon in which the pupa has died: an image of unrealised potential. If, on the other hand, we can recognise the things that make human beings human, AI has real potential to enable us to flourish.
Government regulators ought to pay close attention to the tech elites. When shareholders demand profit, the will to examine the ethics or assess the human impact of AI begins to fade.
Engineers depend on first principles. Basic truths, like ‘water runs downhill’, become the framework from which engineers develop their designs. When engineers ignore or forget first principles in the name of innovation, the design inevitably fails to serve the original intention. If the engineers of a future AI believe that progress comes first, such systems will fail to serve humans.
Instead of progress, the first principle of AI ought to be humans first.
Tech elites are inspiring in their view of the future; however, unless we understand what it means to be human in a world of artificial intelligence, it is humans who will suffer in the pursuit of progress.
Our academics and policymakers must endeavour to understand the first principles of being human. Tech elites should undertake the same line of inquiry. Will they? Not likely.
Companies are oriented toward profits and valuations. There are economic disincentives to pursue lines of thought that may inhibit progress. Furthermore, making knowledge of R&D progress public threatens competitive advantage.
The vast majority of AI and human augmentation technology is proprietary and hidden from the public, while public research bodies are underfunded or non-existent. The Australian Government falls well short on significant funding for AI R&D, let alone research on its ethics.
The problem is complicated and requires a reasonable starting point; however, the burden is not on our generation alone. For thousands of years, philosophers and theologians have wrestled with similar problems, beginning with Plato and Aristotle. G.K. Chesterton called it “the democracy of the dead.” Those who have gone before must be included in a vote for the future.
The work of philosopher John Finnis offers one possible starting place. Though he is only one of many candidate thinkers, Finnis' work has particular merit when talking about humans and AI.
In his book Natural Law and Natural Rights, he identifies seven basic goods, including knowledge, play, and friendship. These goods are valuable in themselves, and human beings experience them uniquely.
Finnis' work brings to light the joy of being human, a far cry from the utilitarian ends of Neuralink brain implants or dopamine-triggering AI. Such a foundation for thinking about humans offers the hope of producing technology that fosters genuine human good.
Further exploration of the work of philosophers, both ancient and modern, would benefit both the leaders of tech development and public regulators. Those who understand humans best have the greatest chance of increasing human flourishing.
In an age of AI and human augmentation, failure to understand what it means to be human could be ruinous. Now is the time to re-examine that meaning, lest the tech elites drive forward in the belief that progress gives meaning, while we, the servile multitude, adopt every offering delivered up at press conferences and product releases.
Joshua Phillips is an engineer, consultant and contributor to the AI ethics debates.