Artificial Intelligence: Beautiful or Terrifying?

To some, it is the future; to others, the end. Since its foundation as an academic discipline in 1956, Artificial Intelligence (AI) has been one of the hottest topics of controversy around the world. Whether you love it, hate it, or don’t care about it, artificial intelligence deserves the attention it demands.

For a name so popular, AI is, unsurprisingly, often misunderstood. It is very tough to define AI, because it’s almost impossible to pin down intelligence in the first place. John McCarthy, the father of the AI discipline, said, “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Though we usually associate the term intelligence with learning and reasoning, the underrated processes we take for granted, such as common sense, general intelligence and perception, are also integral parts of it. As we continue to refine our understanding of what makes us human, the definition of AI changes with it. Sixty years ago, a digital computer or a simple robot could have been called AI, but now they are just like any other objects. This is known as the AI effect: “AI is whatever hasn’t been done yet.” Author Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.” AI researcher Rodney Brooks complains: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’”

AI Categories & Their Characteristics

REACTIVE MACHINES (Type I)

They are purely reactive: they have no ability to form memories or to use past experiences to inform current decisions, they can’t interactively participate in the world, and they will behave exactly the same way every time they encounter the same situation, doing only what we directly command them to do.

Example- Deep Blue, IBM’s chess-playing supercomputer, which beat the great Garry Kasparov in 1997, and Google DeepMind’s AlphaGo, which beat top human Go players.

LIMITED MEMORY (Type II)

They can look a short way into the past, assessing recent data to handle the task at hand, but they can’t turn that data into lasting memories (they can’t learn from previous experiences). A short illustrative sketch contrasting Type I and Type II follows the example below.

Example- Self-driving cars.
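
To make the contrast between the first two types concrete, here is a minimal, purely illustrative Python sketch. The agents, the threshold, and the "distance to the car ahead" readings are hypothetical, invented for this example and not taken from any real system: a reactive agent maps the current observation straight to an action, while a limited-memory agent also consults a short, disposable buffer of recent observations.

```python
from collections import deque

class ReactiveAgent:
    """Type I: decides purely from the current observation; identical input
    always produces identical output, and nothing is remembered."""
    def act(self, gap: float) -> str:
        return "brake" if gap < 10.0 else "cruise"

class LimitedMemoryAgent:
    """Type II: keeps a short, transient buffer of recent observations to
    inform the current decision, but never turns it into lasting knowledge."""
    def __init__(self, window: int = 3):
        self.recent = deque(maxlen=window)  # old readings silently fall off

    def act(self, gap: float) -> str:
        self.recent.append(gap)
        # React to the recent trend, not just the current instant.
        closing_fast = (len(self.recent) == self.recent.maxlen
                        and self.recent[-1] < self.recent[0])
        return "brake" if closing_fast or gap < 10.0 else "cruise"

# Toy usage: the reactive agent answers the same way for the same reading,
# while the limited-memory agent also notices the shrinking gap and brakes.
reactive, limited = ReactiveAgent(), LimitedMemoryAgent()
for gap in [40.0, 30.0, 20.0]:  # hypothetical distances to the car ahead
    print(gap, reactive.act(gap), limited.act(gap))
```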

THEORY OF MIND (Type III)

This is the important divide between the machines of today and the machines scientists are trying to build in the future. In psychology, “theory of mind” means the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior. Machines of this type would form representations not only about the world, but also about the other agents and entities in it.

SELF-AWARENESS (Type IV)

In the final step of AI development, researchers will have to not only understand consciousness, but build machines that have it. “I want that item” is a very different statement from “I know I want that item.” Conscious beings are self-aware, know about their internal states, and are able to predict the feelings of others. These Type IV machines would be able to form representations about themselves, possess some sort of subconscious and consciousness, and truly think, or at least hold the idea of thinking.

 

Frankly speaking, you don’t really need me to tell you about the benefits and limitless opportunities AI can offer. As human civilization progresses impressively (though plenty of arguments can be made against that), artificial intelligence could usher us into a magnificent future: abundant natural and artificial resources, incredible development in science and technology, unimaginable discoveries out there in space, the transformation of the human race from a planetary species into an intergalactic one, and things like that. Well, of course there are no rules against dreaming, but it is still better to come back down to reality. While it is too early to talk about the wonders of the future, right now AI is already doing some pretty impressive things for humanity. Handling delicate medical operations, diagnosing and treating critical diseases, performing risky, physically demanding, highly sensitive technical jobs, providing effective digital security: AI is offering us greater accuracy, pinpoint precision, considerable speed and significant economic advantages in numerous sectors. But unlike the usual custom of the bad getting spotted much faster than the good, with AI you need to dive deeper to find its dark sides.

Broadly speaking, you can divide its dangers into two general categories: the real and the improbable. Let’s get into some details.

Absolutely real threats (already happening to some extent)

Unemployment & Socio-economic Inequality: Science may be beautiful and pure, but most of the time it is money (along with power) that rules the real world. Let alone AI, even the simple machines of today outperform humans at specific physical and technical jobs, because they are programmed to do that and that only. Why do you think all the big companies (tech, industrial, anything) are pursuing all sorts of AI projects so desperately? It’s pure business: as AI develops, the need for human workers falls almost exponentially; fewer employees with more artificial efficiency means significantly less cost, which means a much bigger pile of profit for the remaining few big bosses. And in this age of unprecedented capitalism, where roughly half of Earth’s net wealth belongs to the top 1%, do you really want to imagine the future that the uncontrolled advancement of AI could bring about, especially if you are not in that top portion?

Availability of autonomous weapons with unimaginable destructive capability: When you program a machine to do something specific and it doesn’t go according to plan, you can usually shut it off easily. With AI, that can be much more difficult. An autonomous weapon built on AI, with the single goal of killing or destroying, can’t simply be switched off: there is no mortal operator behind it whom you could stop, and if you can’t find a way to stop the weapon itself, it will only pause once it has completed its mission. Seems scary, right?

Violation of civil rights: A simple, efficient but stupid machine can be given any task with hard limits attached, such as no killing, attacking or destroying under any circumstances, and it is bound to follow your command. But AI fundamentally means the ability to reprogram itself, at least to a limited extent, in order to maximize its chances of success. What if it somehow calculates that you are in its way and chooses (reprograms itself) to bypass its ethical codes? And if it then kills you, how likely is it that the event will be classified as murder instead of an unfortunate accident? Who will be the main culprit: the person who made the machine or the machine itself? And how can a guilty machine be punished: is it to be treated like a person, or is a different set of laws necessary?

Downgrade of our originality, creativity, versatility: Ask yourselves, how many original (mind that word: original) inventions, discoveries and creations have you noticed in the last 30 or 40 years? Homo sapiens isn’t the smartest species because we are the most efficient (which we aren’t); it’s because we have the greatest thinking capability. Beyond any machine, any AI, beyond any object or species yet discovered, the human brain is the most advanced and most complex of all. And this brain needs practice, constantly; otherwise it gets dull. The Internet of Things, automation, household robots, even the Google Assistant, Siri or Alexa: if these get all the work done for us, why do we need that mysterious little living machine inside our thick skulls?

Near extinction of the element of choice: Imagine you are driving a car, moments away from a fatal collision that will kill either your friend or some random stranger, and you know the stranger has a better chance of surviving. Would you take the slightly higher chance of saving one of them, or would you risk them both to save the one you care about? Most people would go with the second option; true, the decision may haunt them for the rest of their lives, but it is all these choices we make, consciously or subconsciously, that define who we are. An AI-controlled car doesn’t work that way; nor does any AI machine understand intuition, crazy ideas or gut feelings. The unpredictability, the diversity of our inner nature: that’s the beauty of our world. Do we really want to live in a world where some mindless object makes our unique choices for us?

Privacy & security violation: You know what I am talking about, right? For the sake of comfort, convenience and laziness, we carelessly pour out our dirty little secrets into the online world. And with AI already in use everywhere in the virtual world and improving rapidly, almost unchecked, those with the tools and the power can literally monitor your every activity. It is true that AI can be, and indeed is, used to flag and stop criminal activities efficiently; but under that pretext, it is also very possible to use AI to manipulate the mass population in many ways.

Possible but unlikely threats (great stuff for science fiction and conspiracy theories, but far-fetched ideas for now)

AI achieving a subconscious: It’s hard to say which would be worse: a mindless machine, or a machine that actually has a mind. Emotion is our subconscious intelligence; our instincts, feelings, desires and urges are all inner or outward expressions of it. To achieve consciousness, one must first possess some kind of subconscious reality. If AI were ever to acquire this trait, it could very well be officially called an animal. Think about it: would an emotionally conscious AI machine always follow your command for your own purposes, or could it be selfish enough to heed its own feelings and pursue its own ends?

Conscious AI: “I want that item” is a very different statement from “I know I want that item.” The subconscious can rarely be controlled; but the very idea of consciousness is the sense of control over the choices we make and the decisions we take, aware of their potential consequences, reasons and purposes even when they go against our subconscious inclinations. Right now, we use the pronoun “it” for a machine because of its lifelessness; do you really think you could do the same for a machine that is completely aware? What explicit line could be drawn there between a human and a machine?

AI threatening the existence of our species: Well, if the above two points ever come true, this last one will no longer be unlikely, right? Every species strives to protect itself, even if that comes at the cost of a hundred other species. In a jungle, eventually, there can be only one king. Do we want to lose the crown of our kingdom, perhaps the only kingdom we will ever have?

A few days ago, I asked a friend whether he considers AI a blessing or a threat. He replied, “A blessed threat.” Artificial intelligence can help us in so many ways, but there is no denying that it is also a massive threat, perhaps even greater than a nuclear holocaust. Don’t get me wrong; AI is inevitable, and we can use it to tackle food crises, global warming, social inequality, resource shortages and whatnot. We don’t have a problem with a machine, even one millions of times more efficient than us, as long as it is stupid (cannot act without our supreme command). But if the development of AI continues unchecked and unsupervised, without very strong moral and ethical codes, standards, limitations and restrictions, it can collide with the fundamental traits that make us human, even go against the very notion of humanity. We live in a time when one man jokingly dubs another a machine, as a compliment or as an insult. Well, how about a future where an AI calls its colleague a human in similar fashion?
