A reflection on artificial intelligence singularity

Source: bdtechtalks.com

Should you feel bad about pulling the plug on a robot or switch off an artificial intelligence algorithm? Not for the moment. But how about when our computers become as smart—or smarter—than us?

Debates about the consequences of artificial general intelligence (AGI) are almost as old as the history of AI itself. Most discussions depict the future of artificial intelligence as either Terminator-like apocalypse or Wall-E-like utopia. But what’s less discussed is how we will perceive, interact with, and accept artificial intelligence agents when they develop traits of life, intelligence, and consciousness.

In a recently published essay, Borna Jalsenjak, scientist at Zagreb School of Economics and Management, discusses super-intelligent AI and analogies between biological and artificial life. Titled “The Artificial Intelligence Singularity: What It Is and What It Is Not,” his work appears in Guide to Deep Learning Basics, a collection of papers and treatises that explore various historic, scientific, and philosophical aspects of artificial intelligence.

Jalsenjak takes us through the philosophical anthropological view of life and how it applies to AI systems that can evolve through their own manipulations. He argues that “thinking machines” will emerge when AI develops its own version of “life,” and leaves us with some food for thought about the more obscure and vague aspects of the future of artificial intelligence.

AI singularity

Singularity is a term that comes up often in discussions about general AI. And as with everything that has to do with AGI, there's a lot of confusion and disagreement about what the singularity is. But most scientists and philosophers agree that it is a turning point at which our AI systems become smarter than we are. Another important aspect of the singularity is time and speed: AI systems will reach a point where they can self-improve in a recurring and accelerating fashion.

“Said in a more succinct way, once there is an AI which is at the level of human beings and that AI can create a slightly more intelligent AI, and then that one can create an even more intelligent AI, and then the next one creates even more intelligent one and it continues like that until there is an AI which is remarkably more advanced than what humans can achieve,” Jalsenjak writes.

To be clear, the artificial intelligence technology we have today, known as narrow AI, is nowhere near achieving such a feat. Jalsenjak describes current AI systems as "domain-specific," such as "AI which is great at making hamburgers but is not good at anything else." The kind of algorithm under discussion in the AI singularity, on the other hand, is "AI that is not subject-specific, or for the lack of a better word, it is domainless and as such it is capable of acting in any domain," Jalsenjak writes.

This is not a discussion about how and when we'll reach AGI. That's a different topic, and also a focus of much debate, with most scientists believing that human-level artificial intelligence is at least decades away. Jalsenjak instead speculates about how the identity of AI (and humans) will be defined when we actually get there, whether that happens tomorrow or in a century.

Is artificial intelligence alive?

There's a great tendency in the AI community to humanize machines, especially as they develop capabilities that show signs of intelligence. While that is clearly an overestimation of today's technology, Jalsenjak also reminds us that artificial general intelligence does not necessarily have to be a replication of the human mind.

“That there is no reason to think that advanced AI will have the same structure as human intelligence if it even ever happens, but since it is in human nature to present states of the world in a way that is closest to us, a certain degree of anthropomorphizing is hard to avoid,” he writes in his essay’s footnote.

One of the greatest differences between humans and current artificial intelligence technology is that while humans are “alive” (and we’ll get to what that means in a moment), AI algorithms are not.

“The state of technology today leaves no doubt that technology is not alive,” Jalsenjak writes, to which he adds, “What we can be curious about is if there ever appears a superintelligence such like it is being predicted in discussions on singularity it might be worthwhile to try and see if we can also consider it to be alive.”

Albeit not organic, such artificial life would have tremendous repercussions on how we perceive AI and act toward it.

What would it take for AI to come alive?

Drawing from concepts of philosophical anthropology, Jalsenjak notes that living beings can act autonomously and take care of themselves and their species, what is known as “immanent activity.”

“Now at least, no matter how advanced machines are, they in that regard always serve in their purpose only as extensions of humans,” Jalsenjak observes.

There are different levels to life, and as the trend shows, AI is slowly making its way toward becoming alive. According to philosophical anthropology, the first signs of life take shape when organisms develop toward a purpose, which is present in today’s goal-oriented AI. The fact that the AI is not “aware” of its goal and mindlessly crunches numbers toward reaching it seems to be irrelevant, Jalsenjak says, because we consider plants and trees as being alive even though they too do not have that sense of awareness.

Another key factor for being considered alive is a being’s ability to repair and improve itself, to the degree that its organism allows. It should also produce and take care of its offspring. This is something we see in trees, insects, birds, mammals, fish, and practically anything we consider alive. The laws of natural selection and evolution have forced every organism to develop mechanisms that allow it to learn and develop skills to adapt to its environment, survive, and ensure the survival of its species.

On child-rearing, Jalsenjak posits that AI reproduction does not necessarily run in parallel to that of other living beings. “Machines do not need offspring to ensure the survival of the species. AI could solve material deterioration problems with merely having enough replacement parts on hand to swap the malfunctioned (dead) parts with the new ones,” he writes. “Live beings reproduce in many ways, so the actual method is not essential.”

When it comes to self-improvement, things get a bit more subtle. Jalsenjak points out that there is already software that is capable of self-modification, even though the degree of self-modification varies between different software.

Today's machine learning algorithms are, to a degree, capable of adapting their behavior to their environment. They tune their many parameters to data collected from the real world, and as the world changes, they can be retrained on new information. For instance, the coronavirus pandemic disrupted many AI systems that had been trained on our normal behavior. Among them are facial recognition algorithms that could no longer detect faces because people were wearing masks. These algorithms can retune their parameters by training on images of mask-wearing faces. Clearly, this level of adaptation is very small compared to the broad capabilities of humans and higher-level animals, but it is comparable to, say, trees that adapt by growing deeper roots when they can't find water at the surface of the ground.
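The parameter-retuning idea above can be illustrated with a deliberately tiny sketch. This is a hypothetical toy, not a real face-recognition pipeline: a one-parameter model fit by gradient descent on one data distribution, then retrained when that distribution shifts.

```python
# Toy sketch (hypothetical): a one-parameter model y = w * x fit by
# gradient descent, then "retrained" when the data distribution shifts.

def train(w, data, lr=0.01, epochs=200):
    """Fit w to (x, y) pairs by minimizing squared error with SGD."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# "Old world" data follows y = 2x; the model learns w close to 2.
old_data = [(x, 2 * x) for x in range(1, 5)]
w = train(0.0, old_data)

# The world changes: new data follows y = 3x. Retraining on the new
# distribution retunes the same parameter toward 3 -- adaptation within
# a fixed model, not a change to the model itself.
new_data = [(x, 3 * x) for x in range(1, 5)]
w = train(w, new_data)
```

The point of the sketch is the limitation: retraining moves the existing parameters, but the structure of the model stays exactly what its designers built.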

An ideal self-improving AI, however, would be one that could create totally new algorithms that would bring fundamental improvements. This is called "recursive self-improvement" and would lead to an endless and accelerating cycle of ever-smarter AI. It could be the digital equivalent of the genetic mutations organisms go through over the span of many, many generations, though the AI would be able to perform it at a much faster pace.

Today, we have some mechanisms, such as genetic algorithms and grid search, that can improve the non-trainable components of machine learning algorithms (also known as hyperparameters). But the scope of change they can bring is very limited and still requires a degree of manual work from a human developer. For instance, you can't expect a recurrent neural network to turn into a Transformer through many mutations.

Recursive self-improvement, however, will give AI the “possibility to replace the algorithm that is being used altogether,” Jalsenjak notes. “This last point is what is needed for the singularity to occur.”

By analogy, looking at the characteristics described above, superintelligent AIs can be considered alive, Jalsenjak concludes, invalidating the claim that AI is merely an extension of human beings. "They will have their own goals, and probably their rights as well," he says. "Humans will, for the first time, share Earth with an entity which is at least as smart as they are and probably a lot smarter."

Would you still be able to unplug the robot without feeling guilt?

Being alive is not enough

At the end of his essay, Jalsenjak acknowledges that the reflection on artificial life leaves many more questions. "Are characteristics described here regarding live beings enough for something to be considered alive or are they just necessary but not sufficient?" he asks.

Having just read I Am a Strange Loop by philosopher and scientist Douglas Hofstadter, I can definitely say no. Identity, self-awareness, and consciousness are other concepts that distinguish living beings from one another. For instance, is a mindless paperclip-building robot that constantly improves its algorithms to turn the entire universe into paperclips alive and deserving of its own rights?

Free will is also an open question. “Humans are co-creators of themselves in a sense that they do not entirely give themselves existence but do make their existence purposeful and do fulfill that purpose,” Jalsenjak writes. “It is not clear will future AIs have the possibility of a free will.”

And finally, there is the problem of the ethics of superintelligent AI. This is a broad topic that includes the kinds of moral principles AI should have, the moral principles humans should have toward AI, and how AIs should view their relations with humans.

The AI community often dismisses such topics, pointing to the clear limits of current deep learning systems and the far-fetched notion of achieving general AI.
