Possible Minds

Possible Minds: 25 Ways of Looking at AI by John Brockman is a collection of essays by leading AI researchers, artists, and philosophers. Each gives their own view on the state and future of AI, framed as a reflection on The Human Use of Human Beings by Norbert Wiener. The essays are all quite different, and here I’ve tried to summarise them.

One thing I learned (or was reminded of) straight away is that technology itself is not a force for good or bad; it is culture that makes it so. Technology only enables it. (Update: from a podcast about the Uyghurs in China: DNA testing can help with your ancestry, but it can also enable mass surveillance and profiling.)

1. Seth Lloyd: Wrong, but More Relevant Than Ever

Seth Lloyd is a theoretical physicist at MIT, Nam P. Suh Professor in the Department of Mechanical Engineering, and an external professor at the Santa Fe Institute.

“Wiener’s central insight was that the world should be understood in terms of information. Complex systems, such as organisms, brains, and human societies, consist of interlocking feedback loops in which signals exchanged between subsystems result in complex but stable behavior. When feedback loops break down, the system goes unstable.”

“Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation. Many obstacles and some innovations can be anticipated, but more cannot. In my own work with experimentalists on building quantum computers, I typically find that some of the technological steps I expect to be easy turn out to be impossible, whereas some of the tasks I imagine to be impossible turn out to be easy. You don’t know until you try.”

“Raw information-processing power does not mean sophisticated information-processing power. While computer power has advanced exponentially, the programs by which computers operate have often failed to advance at all.”

“As machines become more powerful and capable of learning, they learn more and more as human beings do—from multiple examples, often under the supervision of human and machine teachers. Education is as hard and slow for computers as it is for teenagers. Consequently, systems based on deep learning are becoming more rather than less human. The skills they bring to learning are not “better than” but “complementary to” human learning: Computer learning systems can identify patterns that humans can not—and vice versa.”

2. Judea Pearl: The Limitations of Opaque Learning Machines

Judea Pearl is a professor of computer science and director of the Cognitive Systems Laboratory at UCLA. His most recent book, co-authored with Dana Mackenzie, is The Book of Why: The New Science of Cause and Effect.

“Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence.”

“Homo sapiens… create and store a mental representation of their environment, interrogate that representation, distort it by mental acts of imagination, and finally answer the “What if?” kinds of questions. Examples are interventional questions (“What if I do such-and-such?”) and retrospective or counterfactual questions (“What if I had acted differently?”). No learning machine in operation today can answer such questions.”

“I view machine learning as a tool to get us from data to probabilities. But then we still have to make two extra steps to go from probabilities into real understanding—two big steps. One is to predict the effect of actions, and the second is counterfactual imagination. We cannot claim to understand reality unless we make the last two steps.”
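To make the distinction concrete, here is a toy sketch (my construction, not Pearl’s): a simulated rain/sprinkler world where observing the sprinkler on tells you something about rain, while forcing the sprinkler on tells you nothing about the weather. A system that only fits observational data sees just the first quantity.

```python
import random

random.seed(0)

def sample(intervene_sprinkler=None):
    """One draw from a toy structural causal model:
    rain -> sprinkler, rain -> wet, sprinkler -> wet."""
    rain = random.random() < 0.3
    if intervene_sprinkler is None:
        sprinkler = random.random() < (0.1 if rain else 0.6)  # rain discourages sprinkling
    else:
        sprinkler = intervene_sprinkler  # do(sprinkler = x): cuts the rain -> sprinkler link
    wet = rain or sprinkler or random.random() < 0.05
    return rain, sprinkler, wet

N = 100_000
# Rung 1 (seeing): P(rain | sprinkler on), read off observational data.
obs = [sample() for _ in range(N)]
p_rain_given_spr = (sum(r for r, s, w in obs if s)
                    / max(1, sum(1 for r, s, w in obs if s)))
# Rung 2 (doing): P(rain | do(sprinkler on)). Forcing the sprinkler on
# cannot change the weather, so rain stays at its prior of about 0.3.
do_on = [sample(intervene_sprinkler=True) for _ in range(N)]
p_rain_do_spr = sum(r for r, s, w in do_on) / N

print(f"P(rain | sprinkler=on)     ~ {p_rain_given_spr:.2f}")  # well below 0.3
print(f"P(rain | do(sprinkler=on)) ~ {p_rain_do_spr:.2f}")     # about 0.3
```

The two numbers differ because conditioning uses the rain→sprinkler link while intervening severs it; a model-blind function fit to the observational data alone has no way to compute the second number.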

3. Stuart Russell: The Purpose Put into the Machine

Stuart Russell is a professor of computer science and Smith-Zadeh Professor in Engineering at UC Berkeley. He is the co-author (with Peter Norvig) of Artificial Intelligence: A Modern Approach.

“Putting a purpose into a machine that optimizes its behavior according to clearly defined algorithms seems an admirable approach to ensuring that the machine’s “conduct will be carried out on principles acceptable to us!” But, as Wiener warns, we need to put in the right purpose.”

“The technical term for putting in the right purpose [Midas problem] is value alignment. When it fails, we may inadvertently imbue machines with objectives counter to our own. Tasked with finding a cure for cancer as fast as possible, an AI system might elect to use the entire human population as guinea pigs for its experiments.”

“AI research, in its present form, studies the ability to achieve objectives, not the design of those objectives.”

He mentions some common objections, which he then refutes:

  • Don’t worry we can just switch it off (if AGI, will be smart enough)
  • Human-level or superhuman AI is impossible (see nuclear bombs)
  • It’s too soon to worry about it (not predictable, but start sooner than later)
  • Human-level AI isn’t really imminent, in any case (ditto, not predictable with any certainty, but physically possible)
  • You’re just a Luddite (major technologists argue for safety)
  • Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives (from ‘the world’ you can’t ‘see’ our objectives, Bostrom‘s paperclip example)
  • Intelligence is multidimensional, so ‘smarter than humans’ is a meaningless concept (kinda true, but still no reason it won’t happen)

“A more precise definition is given by the framework of cooperative inverse-reinforcement learning, or CIRL.”
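As a toy illustration of the intuition behind this (my numbers, not Russell’s; real CIRL is a two-player game), consider a robot that is uncertain whether its planned action helps or harms the human. Deferring to a human who can switch it off never has lower expected value than acting unilaterally, so the uncertainty itself makes the off switch acceptable:

```python
# Toy "off-switch" calculation. The robot holds a belief over the unknown
# utility u its action has for the human; all numbers are invented.
possible_utilities = [-2.0, -0.5, 0.5, 1.0]
belief =             [0.25, 0.25, 0.25, 0.25]   # uniform uncertainty over u

# Option 1: act immediately, whatever u turns out to be.
act_now = sum(p * u for p, u in zip(belief, possible_utilities))

# Option 2: defer. A (roughly rational) human permits the action only when
# u > 0 and hits the off switch otherwise, which yields utility 0.
defer = sum(p * max(u, 0.0) for p, u in zip(belief, possible_utilities))

print(f"E[u | act immediately]    = {act_now:+.3f}")  # -0.250
print(f"E[u | defer to the human] = {defer:+.3f}")    # +0.375
```

A robot certain of its objective has no such incentive to defer, which is the heart of the ‘just switch it off’ objection above.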

4. George Dyson: The Third Law

George Dyson is a historian of science and technology and the author of Baidarka: The Kayak, Darwin Among the Machines, Project Orion, and Turing’s Cathedral.

“He likes to point out that analog computing, once believed to be as extinct as the Differential Analyzer, has returned. He argues that while we may use digital components, at a certain point the analog computing being performed by the system far exceeds the complexity of the digital code with which it is built. He believes that true artificial intelligence—with analog control systems emerging from a digital substrate the way digital computers emerged out of analog components in the aftermath of World War II—may not be as far off as we think.”

“Digital computers execute transformations between two species of bits: bits representing differences in space and bits representing differences in time.”

“Analog computers also mediate transformations between two forms of information: structure in space and behavior in time.”

“This [digital vs analog] is starting to change: from the bottom up, as the threefold drivers of drone warfare, autonomous vehicles, and cell phones push the development of neuromorphic microprocessors that implement actual neural networks, rather than simulations of neural networks, directly in silicon (and other potential substrates); and from the top down, as our largest and most successful enterprises increasingly turn to analog computation in their infiltration and control of the world.”

“Nowhere is there any controlling model of the system except the system itself.” (the model is the system itself; it can’t be reduced or ‘controlled’)

“Before you know it, your system will not only be observing and mapping the meaning of things, it will start constructing meaning as well. In time, it will control meaning, in the same way the traffic map starts to control the flow of traffic even though no one seems to be in control.”

Not the three laws of robotics (just kidding), but Dyson’s three laws of artificial intelligence:

  1. Any effective control system must be as complex as the system it controls (Ashby’s Law)
  2. The simplest complete model of an organism is the organism itself (Von Neumann). Trying to reduce the system’s behavior to any formal description makes things more complicated, not less
  3. Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand (there is a loophole in the third law. It is entirely possible to build something without understanding it)

“Provably ‘good’ AI is a myth. Our relationship with true AI will always be a matter of faith, not proof.”

“We worry too much about machine intelligence and not enough about self-reproduction, communication, and control.”

5. Daniel C. Dennett: What Can We Do?

Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds.

(quoting Wiener) “[I]n the long run, there is no distinction between arming ourselves and arming our enemies.” “The information age is also the dysinformation age.”

“[W]e’re making tools, not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations.”

“AI in its current manifestations is parasitic on human intelligence. It quite indiscriminately gorges on whatever has been produced by human creators and extracts the patterns to be found there—including some of our most pernicious habits. These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals.”

“We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.”

6. Rodney Brooks: The Inhuman Mess Our Machines Have Gotten Us Into

Rodney Brooks is a computer scientist; Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). He is the author of Flesh and Machines.

(John Brockman) “[H]e is alarmed by the extent to which we have come to rely on pervasive systems that are not just exploitative but also vulnerable, as a result of the too-rapid development of software engineering—an advance that seems to have outstripped the imposition of reliably effective safeguards.”

“We rely on computers for our banking, our payment of bills, our retirement accounts, our mortgages, our purchasing of goods and services—these, too, are all vulnerable.”

“Humankind has gotten itself into a fine pickle: We are being exploited by companies that paradoxically deliver services we crave, and at the same time our lives depend on many software-enabled systems that are open to attack.”

“Moral leadership is the first and biggest challenge.”

7. Frank Wilczek: The Unity of Intelligence

Frank Wilczek is Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and the author of A Beautiful Question: Finding Nature’s Deep Design.

In asking if AI can be conscious, creative, and/or evil, Wilczek answers yes. “Evidence from those fields makes it overwhelmingly likely that there is no sharp divide between natural and artificial intelligence.”

Talking about the ‘Astonishing Hypothesis’ that mind emerges from matter. “People try to understand how minds work by understanding how brains function; and they try to understand how brains function by studying how information is encoded in electrical and chemical signals, transformed by physical processes, and used to control behavior.”

“No one has ever stumbled upon a power of mind that is separate from conventional physical events in biological organisms.”

“… natural intelligence is a special case of artificial intelligence.” He calls it the ‘astonishing corollary’.

“Human mind emerges from matter. Matter is what physics says it is. Therefore, the human mind emerges from physical processes we understand and can reproduce artificially. Therefore, natural intelligence is a special case of artificial intelligence.”

We have been upgrading and enhancing our intelligence for thousands of years: first with fire, glasses, and clothing; now with phones, the internet, and X-rays. Wilczek boils the advantages machines can gain over brains down to six factors: speed, size, stability, duty cycle, modularity, and quantum readiness.

Human brains are still better than machines at: three-dimensionality, self-repair, connectivity, development, and integration.

“If that’s right, we can look forward to several generations during which humans, empowered and augmented by smart devices, coexist with increasingly capable autonomous AIs.”

8. Max Tegmark: Let’s Aspire to More Than Making Ourselves Obsolete

Max Tegmark is an MIT physicist and AI researcher, president of the Future of Life Institute, scientific director of the Foundational Questions Institute, and the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.

“Consciousness is the cosmic awakening; it transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty, hope, meaning, and purpose.”

“But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there’s no law of physics that says one can’t build machines more intelligent in every way than we are, and able to seed cosmic life.”

Tegmark argues that we have been inventing our way out of our limitations in stages: 1) we outsourced natural processes (heat, light, mechanical power) to machines; 2) we then discovered that our bodies are themselves (biological) machines; and 3) we have now started building machines that outshine us in cognitive tasks too.

“The existence of affordable AGI means, by definition, that all jobs can be done more cheaply by machines, so anyone claiming that “people will always find new well-paying jobs” is in effect claiming that AI researchers will fail to build AGI.”

“Homo sapiens is by nature curious, which will motivate the scientific quest for understanding intelligence and developing AGI even without economic incentives.”

“I’m advocating a strategy change from “Let’s rush to build technology that makes us obsolete—what could possibly go wrong?” to “Let’s envision an inspiring future and steer toward it.””

  1. An arms race in lethal autonomous weapons should be avoided.
  2. The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  3. Investments in AI should be accompanied by funding for research on ensuring its beneficial use. . . . How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

“[T]he real risk with AGI isn’t malice but competence.”

“This mistakenly equates intelligence with morality. Intelligence isn’t good or evil but morally neutral. It’s simply an ability to accomplish complex goals, good or bad.”

“Let’s create our own meaning, based on something more profound than having jobs. AGI can enable us to finally become the masters of our own destiny. Let’s make that destiny a truly inspiring one!”

9. Jaan Tallinn: Dissident Messages

Jaan Tallinn, a computer programmer, theoretical physicist, and investor, is a co-developer of Skype and Kazaa. In 2012, he co-founded the Centre for the Study of Existential Risk—an interdisciplinary research institute that works to mitigate risks “associated with emerging technologies and human activity.”

“As predicted by Turing, once we have superhuman AI (“the machine thinking method”), the human-brain regime will end. Look around you—you’re witnessing the final decades of a hundred-thousand-year regime.”

Another strong incentive to turn a blind eye to the AI risk is the (very human) curiosity that knows no bounds. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.”

(quoting Yudkowsky, blog) “[A]sking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth. There would indeed be effects, but you’d be missing the point.”

“… superintelligent AI is an environmental risk.”

Tallinn argues that we puny humans fit nicely within the confines of Earth (although we have shaped it to our liking; think air conditioning), but that AI would be able to survive in a much wider range of environments (e.g. deep space).

10. Steven Pinker: Tech Prophecy and the Underappreciated Causal Power of Ideas

Steven Pinker, a Johnstone Family Professor in the Department of Psychology at Harvard University, is an experimental psychologist who conducts research in visual cognition, psycholinguistics, and social relations. He is the author of eleven books, including The Blank Slate, The Better Angels of Our Nature, and, most recently, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

“A healthy society—one that gives its members the means to pursue life in defiance of entropy—allows information sensed and contributed by its members to feed back and affect how the society is governed. A dysfunctional society invokes dogma and authority to impose control from the top down.”

“The possibility that machines threaten a new fascism must be weighed against the vigor of the liberal ideas, institutions, and norms… The flaw in today’s dystopian prophecies is that they disregard the existence of these norms and institutions, or drastically underestimate their causal potency.”

“The reason is that almost all the variation across time and space in freedom of thought is driven by differences in norms and institutions and almost none of it by differences in technology.”

What I take from this is that technology is agnostic, and that how we use it (norms/culture) will determine whether it is used for good or bad. Pinker argues that we/activists should focus on things like the laws, not the technology.

Pinker also dismisses the competent-but-stupid AI scenarios, in which the AI is very good at completing a goal but does so too literally (e.g. making everyone happy by installing dopamine drips). He argues that intelligence (as a broad concept) will consist of several parts that ‘grow’ together, and thus an AI able to do large things in the world will also be ‘smart’ enough not to ‘hack’ the goal. (I’m not totally sure about this line of argument, and I think Nick Bostrom in particular would disagree.)

“Rates of industrial, domestic, and transportation fatalities have fallen by more than 95 (and often 99) percent since their highs in the first half of the 20th century. Yet tech prophets of malevolent or oblivious artificial intelligence write as if this momentous transformation never happened and one morning engineers will hand total control of the physical world to untested machines, heedless of the human consequences.”

11. David Deutsch: Beyond Reward and Punishment

David Deutsch is a quantum physicist and a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University. He is the author of The Fabric of Reality and The Beginning of Infinity.

(about humans in the past) “Moreover, this must have been knowledge in the sense of understanding, because it is impossible to imitate novel complex behaviors like those without understanding what the component behaviors are for.”

“Such knowledgeable imitation depends on successfully guessing explanations, whether verbal or not, of what the other person is trying to achieve and how each of his actions contributes to that—for instance, when he cuts a groove in some wood, gathers dry kindling to put in it, and so on.”

“No nonhuman ape today has this ability to imitate novel complex behaviors. Nor does any present-day artificial intelligence. But our pre-sapiens ancestors did.”

“Any ability based on guessing must include means of correcting one’s guesses, since most guesses will be wrong at first. (There are always many more ways of being wrong than right.) Bayesian updating is inadequate, because it cannot generate novel guesses about the purpose of an action, only fine-tune—or, at best, choose among—existing ones. Creativity is needed. As the philosopher Karl Popper explained, creative criticism, interleaved with creative conjecture, is how humans learn one another’s behaviors, including language, and extract meaning from one another’s utterances.”
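A minimal sketch of that limitation (my illustration, not Deutsch’s): Bayes’ rule only reweights hypotheses that have already been conjectured, so if the true explanation is missing from the set, no amount of data will ever put it there.

```python
def update(prior: dict, likelihood) -> dict:
    """One step of Bayes' rule over a FIXED set of hypotheses."""
    post = {h: p * likelihood(h) for h, p in prior.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Guesses about why someone cuts a groove in a piece of wood. The true
# purpose (say, starting a friction fire) was never conjectured, so it
# simply has no entry to receive probability mass.
beliefs = {"decorating": 0.5, "weakening the plank": 0.5}

for _ in range(100):  # observations that mildly favor "decorating"
    beliefs = update(beliefs, lambda h: 0.6 if h == "decorating" else 0.4)

print(beliefs)  # ~{'decorating': 1.0, ...}: near-certainty, and still wrong
```

Generating ‘friction fire’ as a new candidate is a creative conjecture, and that step lies outside what the update rule itself can do.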

“So everyone had the same aspiration in life: to avoid the punishments and get the rewards. In a typical generation, no one invented anything, because no one aspired to anything new, because everyone had already despaired of improvement being possible.” (more in From Bacteria to Bach and Back, Daniel Dennett)

“The worry that AGIs are uniquely dangerous because they could run on ever better hardware is a fallacy, since human thought will be accelerated by the same technology.” (very much opposing many others who see AI as dangerous, although in many cases they are talking about two different things; Deutsch is talking specifically about creative AGI)

12. Tom Griffiths: The Artificial Use of Human Beings

Tom Griffiths is Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By.

“But if you want to know why the driver in front of you cut you off, why people vote against their interests, or what birthday present you should get for your partner, you’re still better off asking a human than a machine. Solving those problems requires building models of human minds that can be implemented inside a computer—something that’s essential not just to better integrate machines into human societies but to make sure that human societies can continue to exist.”

Making inferences can be very difficult. If you prefer dessert, will your AI now buy you only desserts? Knowing what humans want (insofar as we even know it ourselves) will be a very big challenge.

“One of the tools used for solving this problem is inverse-reinforcement learning. Reinforcement learning is a standard method for training intelligent machines. By associating particular outcomes with rewards, a machine-learning system can be trained to follow strategies that produce those outcomes.”

“If you’re trying to make inferences about the rewards that motivate human behavior, the generative model is really a theory of how people behave—how human minds work. Inferences about the hidden causes behind the behavior of other people reflect a sophisticated model of human nature that we all carry around in our heads. When that model is accurate, we make good inferences.”
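Here is a toy inverse-reinforcement-learning sketch in that spirit (my illustration, not Griffiths’ actual models; the candidate rewards and the softmax ‘noisily rational’ chooser are assumptions, though softmax is a common generative model of behavior): infer which reward function best explains someone’s observed snack choices.

```python
import math

OPTIONS = ("cake", "salad")
observed_choices = ["salad", "salad", "cake", "salad", "salad", "salad"]

def choice_likelihood(choice, reward, beta=1.0):
    """P(choice | reward) for a softmax (noisily rational) chooser."""
    z = sum(math.exp(beta * reward[o]) for o in OPTIONS)
    return math.exp(beta * reward[choice]) / z

# Candidate reward functions: hypotheses about what the person values.
hypotheses = {
    "loves dessert":  {"cake": 2.0, "salad": 0.0},
    "health-focused": {"cake": 0.0, "salad": 1.5},
    "indifferent":    {"cake": 0.0, "salad": 0.0},
}

# Bayesian inference over rewards, given the observed behavior.
posterior = {name: 1.0 for name in hypotheses}
for choice in observed_choices:
    for name in posterior:
        posterior[name] *= choice_likelihood(choice, hypotheses[name])
z = sum(posterior.values())
posterior = {name: p / z for name, p in posterior.items()}

print(posterior)  # "health-focused" dominates on this mostly-salad data
```

Note the quote’s point: the inference is only as good as the generative model. If people sometimes pick cake as a treat after a hard day, a pure softmax-in-reward model will misread what they value.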

“[W]hen it comes to understanding the human mind, these two goals—accuracy and generalizability—have long been at odds with each other. … Ultimately, what we need is a way to describe how human minds work that has the generalizability of rationality and the accuracy of heuristics.”

“To develop a more realistic model of rational behavior, we need to take into account the cost of computation. Real agents need to modulate the amount of time they spend thinking by the effect the extra thought has on the results of a decision.” The model used for this is called ‘bounded rationality’.
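A minimal sketch of that trade-off (the quality curve and cost are invented): decision quality improves with deliberation time at a diminishing rate, thinking costs something per second, and a bounded-rational agent stops where net value peaks.

```python
import math

def decision_value(t):
    """Expected quality of the decision after t seconds of thought
    (diminishing returns; an invented curve for illustration)."""
    return 10 * (1 - math.exp(-0.5 * t))

COST_PER_SECOND = 1.0  # what a second of deliberation costs the agent

# Search a grid of thinking times for the best net value.
best_t = max((t / 10 for t in range(0, 101)),
             key=lambda t: decision_value(t) - COST_PER_SECOND * t)
net = decision_value(best_t) - COST_PER_SECOND * best_t
print(f"Think for ~{best_t:.1f}s (net value {net:.2f})")  # ~3.2s here
```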

“Human beings are an amazing example of systems that act intelligently despite significant computational constraints. We’re quite good at developing strategies that allow us to solve problems pretty well without working too hard. Understanding how we do this will be a step toward making computers work smarter, not harder.”

13. Anca Dragan: Putting the Human into the AI Equation

Anca Dragan is an assistant professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She co-founded and serves on the steering committee for the Berkeley AI Research (BAIR) Lab and is a co-principal investigator in Berkeley’s Center for Human-Compatible AI.

“At the core of artificial intelligence is our mathematical definition of what an AI agent (a robot) is. When we define a robot, we define states, actions, and rewards.” The goal of an AI is to get the highest cumulative reward.

We have been doing quite well with this definition. “But with increasing AI capability, the problems we want to tackle don’t fit neatly into this framework. We can no longer cut off a tiny piece of the world, put it in a box, and give it to a robot.”
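A minimal sketch of that textbook definition (the little grid world and its numbers are invented): an agent is given by states, actions, and rewards, and its goal is the highest cumulative reward.

```python
import random

# Five positions on a line; the agent may step left or right.
STATES = range(5)
ACTIONS = (-1, +1)
GOAL, STEP_COST = 4, -0.1

def reward(state):
    return 1.0 if state == GOAL else STEP_COST  # small cost per wasted step

def run_episode(policy, start=0, max_steps=20):
    """Roll out a policy and return its cumulative reward."""
    state, total = start, 0.0
    for _ in range(max_steps):
        state = min(max(state + policy(state), min(STATES)), max(STATES))
        total += reward(state)
        if state == GOAL:
            break
    return total

always_right = lambda s: +1                      # heads straight for the goal
random_walk = lambda s: random.choice(ACTIONS)   # wanders, collecting step costs
print("always right:", run_episode(always_right))
print("random walk :", run_episode(random_walk))
```

Everything the robot ‘cares about’ lives in that reward line, which is exactly why the framing gets strained once the task can no longer be boxed off from the rest of the world.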

“So to anticipate human actions, robots need to start understanding human decision making. And that doesn’t mean assuming that human behavior is perfectly optimal; that might be enough for a chess- or Go-playing robot, but in the real world, people’s decisions are less predictable than the optimal move in a board game.” Here I think she is (implicitly) referring to Judea Pearl and David Deutsch, who argue that this kind of understanding/prediction is not available in current AI systems.

“Finally, just as robots need to anticipate what people will do next, people need to do the same with robots. This is why transparency is important. Not only will robots need good mental models of people but people will need good mental models of robots.”

“In general, humans have had a notoriously difficult time specifying exactly what they want, as exemplified by all those genie legends. An AI paradigm in which robots get some externally specified reward fails when that reward is not perfectly well thought out. It may incentivize the robot to behave in the wrong way and even resist our attempts to correct its behavior, as that would lead to a lower specified reward.”

What Anca argues for is that we should have AI that reasons about us. I think this is the right solution, but also the most difficult one. We are bad at it ourselves, and reasons/preferences differ between people. It will be a tough nut to crack.

14. Chris Anderson: Gradient Descent

Chris Anderson is an entrepreneur; former editor-in-chief of Wired; co-founder and CEO of 3DR; and author of The Long Tail, Free, and Makers.

Chris’s story starts with one about mosquitoes, which perform a gradient descent when searching for you: the stronger the smell, the more they move in that direction (an algorithm). He argues that almost everything around us is driven by gradient descent (hunger, sleepiness, etc.).

He talks more about local minima (finding a solution while a better one might lie over the next ‘hill’). One thing you would probably need to escape them is a (mental) map.
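A tiny numeric sketch of both ideas (the landscape is invented): plain gradient descent slides into whichever dip is nearest, and random restarts are one crude way to ‘rock yourself out’ toward deeper minima.

```python
import math, random

random.seed(1)

def f(x):
    """A bumpy 1-D landscape: global minimum at x = 0, local dips elsewhere."""
    return x * x / 10 + math.sin(x) ** 2

def descend(x, lr=0.1, steps=500):
    """Plain gradient descent with a numeric gradient."""
    for _ in range(steps):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= lr * grad  # follow the gradient downhill
    return x

stuck = descend(4.0)  # starts near a local dip and settles in it
best = min((descend(random.uniform(-10, 10)) for _ in range(20)), key=f)
print(f"single start : x = {stuck:.2f}, f = {f(stuck):.3f}")   # local minimum
print(f"with restarts: x = {best:.2f}, f = {f(best):.3f}")     # ~global minimum
```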

“We’re going to rock ourselves out of local minima and find deeper minima, maybe even global minima. And when we’re done, we may even have taught machines to seem as smart as a mosquito, forever descending the cosmic gradients to an ultimate goal, whatever that may be.”

15. David Kaiser: “Information” for Wiener, for Shannon, and for Us

David Kaiser is Germeshausen Professor of the History of Science and professor of physics at MIT, and head of its Program in Science, Technology and Society. He is the author of How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival and American Physics and the Cold War Bubble (forthcoming).

“Wiener borrowed this insight when composing Human Use. If information was like entropy, then it could not be conserved—or contained.” One key idea here is that if something is known somewhere, you can’t stop others from learning it (only delay it a bit, possibly). (from Wiener) “[T]he fate of information in the typically American world is to become something which can be bought or sold.”

(Hmm, I guess I didn’t find many nuggets of information (a term defined there in a few ways) in this piece.)

16. Neil Gershenfeld: Scaling

A

17. W. Daniel Hillis: The First Machine Intelligences

A

18. Venki Ramakrishnan: Will Computers Become Our Overlords?

A

19. Alex “Sandy” Pentland: The Human Strategy

A

20. Hans Ulrich Obrist: Making the Invisible Visible: Art Meets AI

A

21. Alison Gopnik: AIs Versus Four-Year-Olds

A

22. Peter Galison: Algorists Dream of Objectivity

A

23. George M. Church: The Rights of Machines

A

24. Caroline A. Jones: The Artistic Use of Cybernetic Beings

A

25. Stephen Wolfram: Artificial Intelligence and the Future of Civilization

A

And from one very good review, I’m going to copy another set of views:

Nobody knows
It has proven nigh impossible to predict where scientific progress or humanity is headed, even when developments (of any sort) were stable. The exercise is more futile now, given the pace of change of new technologies. With rising complexity rise the future potentialities: almost anything, everything, and nothing predicted and predicated is possible sometime or other in the next century. The wide variety of views present in the book has the brightest minds talking past each other, partly because the history and experience they cite are useless for projecting what could lie ahead. Differing meanings of the terms used, as explained in a couple of sections below, also contribute to the extent of the disagreements, as has been commonplace amongst philosophers of any ilk for millennia.

Machines will surpass humanity
Most of the contributors seem to agree that there will be hardly any human skills and faculties where our technological creations remain inferior forever. None of the contributors resorts to discussions of the soul or a divine entity to justify our perpetual supremacy. Our ability to sense causality, to impute a purpose, and our apparent consciousness are seen by a handful as what will keep humanity ahead, but none of these commentators expects any such supremacy to last forever. Let’s use a bad analogy: if our natural procreations, our children, grow up to develop their own purpose, outgrow their parents in many skills, and at times develop the willingness to act against their creators, will machines surely never go down that path? The point behind this lousy analogy is that our silicon creations could keep growing exponentially for decades and centuries, if not millennia, to come. A few years ago, pessimists used to cite machines’ inability to recognize a cat or a face through optical sensors as one reason humans would remain superior for a long time; machines have surpassed humans at sound and face recognition in a few short years. They may walk or run better than us next, even if robots’ plodding appears clunky and laughable today (note: Boston Dynamics is already doing very well). And machines as a unit may also learn to ascribe purposes, or exhibit complexities that make our consciousness look like that of a cardboard box, in a few decades. It is difficult to pick a set of human aspects that will remain superior for the next hundred years.

Subpoint: Machines will have their own goals and purposes
It is likely that consciousness is nothing but an emergent quality of many neurons interacting with each other, just the way fluidity emerges from water molecules or planetary forces from rocks coming together. What we humans mean by words like beauty, art, goals, and purpose may likewise be emergent qualities of the sheer number of underlying components and their complex interactions and interrelations. If today’s machines can only code, crunch data, or uncover hidden patterns, but cannot define their own “ultimate” utility functions, then the “ultimate” stage set by humans is steadily being pushed back, with the machines working out the rest on their own. It is not ridiculous to assume that what we deem exotic human qualia (goals, consciousness, beauty, etc.) will also fall prey to ever-growing machine abilities if they prove to be nothing but emergent qualities of complex computational techniques.

The pessimistic forecasts are far more compulsive reads
There is no reason AI/AGI/technology progress should make humanity useless, subservient, or extinct for centuries, even if it is a long-term inevitability. As discussed above, no one knows! That said, the cases of the optimists (i.e., those who mostly believe that the positives of a technology boom will far outweigh any attendant harmful impacts) appear lame compared to those of the pessimists. Once again, the optimists do not have to be wrong, but the stage belongs to those with scary stories. Of the 25 views you read, the most frightening are by far the most compelling. The trend tells us something about what gets our goat and stirs us to action. That said, the pessimists appear more right because almost all the optimists base their case on dire forecasts that historically did not come true, rather than painting whatever upcoming utopia they have in mind. The optimists rest their case on grandmotherly adages like “this time is not different”, while the pessimists point to the horses who, based on a few thousand years of history, thought they would carry humans in transport forever, but became showcase items instead. (mandatory CGP Grey video)

Terms without precise meanings and predictions that are too static
With the band of new philosophers and heavy thinkers in this compendium, there are dozens of commonly used terms (AI, AGI, co-existence, etc.) with no precise meaning, or with multiple meanings. AI appears to be perpetually a technology of tomorrow, never mind that what we have today would likely have surpassed most scientists’ definition of AI a few decades ago. Given the way we use our smart devices, even a person from the late nineties would claim we already co-exist with our gadgets. The field does not need its Wittgenstein to prove that these thinkers are talking different languages; the technology world is moving far too quickly for the best thinkers to take decades to agree on the underlying meaning of terms. Readers have to distil the views themselves, keeping in mind the plethora of different meanings and time-frames used by the writers while talking about the same subject. (I find this a very good point: we are already living with AI in many forms, and AGI is, I believe, not something that will happen at moment X; it will be different skills/intelligences arriving at different moments.) (It also makes me think of a flood rising higher and higher, with some skills sitting higher up the mountainside than others.)

Multiple dystopias
This reviewer can categorize the doomsayers into at least three different buckets:
a. What will we do? If machines do everything better, will humans be like dogs next, better off sitting pretty at home than trying to work at anything? If that is the case, how will the machines and the rest of the working world bear the burden of a rising horde of the unemployed? How will this unemployed lot live life or find purpose?
b. Will we have any free will? As machines understand us faster and better than we understand ourselves, and continuously act to change our behaviour, will we have any power to stand up against the big brother (be it a set of corporates, governments, or machines) converting us into its zombies? Will we be just like our Stone Age forefathers, or like animals, with the unfathomably massive natural forces they faced replaced by machine controls over us?
c. If machines/AGI change the world to make it more suitable for their own existence, will humans go extinct? Will machines feel the need to euthanize our race for their purposes someday?
With the rising concerns over privacy and security, most contributors’ AGI dystopia worries currently centre on the second category. If economic cycles turn, the first-category pessimists may get more of a hearing, even though they are the ones most laughed off on historical grounds now. The third-category doomsayers will carry the sensationalist tag until it becomes too late, assuming that day is somewhere in our future.

View 1: think tanks will not work
Let’s say that humanity’s primary goal with respect to AI is guaranteed survival and continued dominance: we want at least some of us to remain the ultimate overlords of this planet. This requires suppressing some AI developments, or at least monitoring them closely. Many groups have been formed globally with the right objectives in mind, but such think tanks are slow-moving entities with little power to make an immediate difference. It is likely that by the time some of their suggestions are enacted, the AI world will already have skirted the underlying issues, with many more issues of different varieties turning critical. These groups play an important role in highlighting the problems at hand in an unbiased way, but they are unlikely to make a real difference on their own.

View 2: the best solution could be fighting iron with iron!
In a free-wheeling technocratic world, the best solutions will emerge from competing entities. It is likely that, despite the cries of those with extreme views, no “kill switch” will come into existence for any humanity-level AI. The more “the good” who follow the laws are suppressed in one place, the greater the powers of some “bad” actors elsewhere. This topic is controversial and requires an extended essay of its own, so perhaps not for this review!