It’s difficult to know what Teilhard would have thought about the coevolution of humanity and technology today. As perhaps the most prescient forecaster of the future of humanity during his own time, he anticipated much about the all-encompassing role of thinking machines we’re experiencing now.
He wrote the following passages in the section of Formation of the Noosphere titled “The Cerebral Apparatus.” Keep in mind that they were written in 1947, when only a few computers – massive machines with relatively limited capabilities – existed in the world. They follow a description of the way telecommunications technologies such as radio and television were uniting people around the world:
But I am also thinking of the insidious growth of those astonishing electronic computers which, pulsating with signals at the rate of hundreds of thousands a second, not only relieve our brains of tedious and exhausting work but, because they enhance the essential (and too little noted) factor of “speed of thought,” are also paving the way for a revolution in the sphere of research…
…There is a school of philosophy which smiles disdainfully at these and kindred forms of progress. “Commercial machines,” we hear them say, “machines for people in a hurry, designed to gain time and money.” One is tempted to call them blind, since they fail to perceive that all these material instruments, ineluctably linked in their birth and development, are finally nothing less than the manifestation of a particular kind of super-Brain, capable of attaining mastery over some supersphere in the universe and in the realm of thought.
While Teilhard foresaw much of the potential for computers to expand and accelerate human thought, to what degree did his vision encompass artificial intelligence? The official birth of AI as a distinct field of study came at the Dartmouth Summer Research Project on Artificial Intelligence in 1956, the year after Teilhard’s death. The field has come a long way since then, though its potential, and how we should address its role in the noosphere, remain hotly debated.
As with all aspects of technology, AI has the potential for great good and real harm. It is already ever-present in our lives in the form of recommendation engines, which track us everywhere and observe our choices, desires, and preferences, then attempt to manipulate our behavior based on what they learn.
Decisions we make now about whether and how we implement AI in various aspects of our lives will have consequences far into the noosphere’s future. In his 2019 book, Human Compatible: Artificial Intelligence and the Problem of Control, Stuart Russell explores those issues and consequences. We recently asked him to discuss these ideas with fellow UC Berkeley professor Terrence Deacon (an advisor to Human Energy) and Science of the Noosphere’s David Sloan Wilson.
Central to Stuart Russell’s perspective on AI is the role of human preferences. Near the beginning of the discussion, he explains why he considers them so important:
A topic that pervades much of my recent thinking about AI and its role is what human preferences are. Version zero of the theory is that roughly speaking, humans do have preferences about the future. There are futures we want to avoid, such as extinction and enslavement and various other dystopias, and futures that we would like to bring about. And this concept of the noosphere is really important to that because we’re not born with these preferences, they result from our immersion in the noosphere. And so understanding the dynamics of that is extremely important because to some extent, our preferences about the future end up determining what future we get.
The following conversation is presented in three parts. Preceding each video clip is a description of some of the main points of discussion. Under each video clip is a link to a full transcript of the conversation.
Artificial Intelligence in the Noosphere: Part One Transcript
Part One Notes
David asks Stuart and Terry to introduce themselves and describe their backgrounds, which they do.
2:08 — David introduces the topic of Teilhard’s interest in the role of technology in human evolution, and the evolution of the noosphere. He asks them to comment on that idea, with the goal of global cooperation in mind.
3:25 — Terry introduces the idea that human cognition is a collective phenomenon of our species: “… all of our cognitive capacities depend upon being embedded in this culture, held together by language and other forms of communication. So that in a large respect, we are not islands of cognition, we’re this linked cognition.” David notes that language is a technology, and Terry agrees. He says that Teilhard realized early on that technology was bringing humans closer together in a convergence of thought. Teilhard’s “notion of the Noosphere was that over time, this convergence would grow and grow and grow.”
5:34 — Terry brings these thoughts into the realm of AI, which is becoming “intermingled with and entangled with this process of human communication, playing more and more of a role.” He ends by wondering how we’re going to control AI, in a variety of realms.
7:29 — Stuart describes different roads they could take in the conversation. The first is human preferences regarding the future. “There are futures we want to avoid, such as extinction and enslavement and various other dystopias, and futures that we would like to bring about.”
8:28 — Stuart explains the “concept of the Noosphere is really important to that because we’re not born with these preferences, they result from our immersion in the Noosphere. And so understanding the dynamics of that is extremely important because to some extent, our preferences about the future end up determining what future we get. We may not get the one we actually prefer, but our preferences control the way we act and one way or another, that ends up controlling the future.”
9:10 — Stuart explains his point that the current version of AI isn’t contributing “additional thinkers in the Noosphere, but it’s dramatically affecting the connectivity of the thought processes that happen in the Noosphere, and it’s doing that primarily through recommender systems operating in social media.” He explains in some detail why that gives social media more control over society than historical tyrants had. That degree of social control has “a huge effect on how the Noosphere operates.”
12:14 — Stuart explains why future versions of more powerful AI might be much more concerning, including “the problem of control. So if you’re making systems that are more powerful than human beings, how do human beings retain power over those systems forever? And that almost sounds like an impossibility.”
13:41 — Stuart expresses a final concern: “even if we do retain control, we are then faced with…our real, our permanent problem, which is how to live wisely, agreeably and well in the absence of any forces that arise from economic necessity.” He explains what he sees as the problematic consequences of that.
14:52 — David asks Stuart to project a possible dystopian outcome. Stuart describes a group he worked with to come up with the opposite—a utopian vision for general purpose AI. He mentions socioeconomic arrangements as the main barrier to a positive vision, “because you keep getting stuck on this fact that the economic forces will end up taking away all the economic roles from humans.”
16:12 — Stuart describes the challenges of adapting human society and individual lives in a world of general AI. He raises some of the big social and economic questions that underlie the problems, and says, “We don’t understand the beginnings of the answers to those questions, but those are the answers we’re going to need to have in the future, and it’s going to take a long time for us to get from here to there, and we may not finish in time, and so that does worry me.”
18:15 — Terry introduces the topic of behaviorism. He describes basic behaviorist theory, then uses it as the basis for another “side of the AI story as the perfect behaviorist trainer of human beings.” He explains how that works, offering China’s mass surveillance and social credit system as a dystopian example and Facebook, Google, and Amazon as commercial ones, and ends with, “As a means of manipulating and controlling, it seems to be an almost inevitable future.”
Artificial Intelligence in the Noosphere: Part Two Transcript
Part Two Notes
David opens this segment by describing the Amish approach to technology. They are not anti-technology, as many believe, but before adopting any new technology they examine it in terms of how it will affect the community at large. David explains the evolutionary significance of this systems-level approach.
1:54 — David explains why we ultimately need to apply that systems-level approach to technology—asking whether it’s good for the community at large—to the whole earth. “What it leads to, I think, inevitably, is a whole earth ethic, in which when we evaluate our options and when we train our AI systems, then that’s what we need to train them to do.”
3:01 — Stuart returns to the topic of behaviorism, presenting a different view than Terry’s. Some AI models do involve internal functions such as goals and beliefs. Also, recommendation algorithms “simply function to manipulate individuals towards whatever is the closest point where they’re maximally predictable.”
5:31 — Stuart explains why he thinks behaviorist tendencies in the AI community are really harmful. They build AIs to optimize “objectives that originally we put in like click through or eyeballs or engagement, but objectives that are not complete and correct pictures of our preferences about the future, but typically very narrow, very partial. And optimizing those objectives ends up wrecking everything else, and I think that’s what we’re seeing.” He compares that risk to the fossil fuel industry and climate change.
7:30 — David and Stuart have an exchange about laissez-faire capitalism, with or without technology. Stuart brings up the problem of accounting for externalities and relates it to climate change again.
8:52 — Stuart returns to the problems of a behaviorist approach, stating, “it’s not that the machines will decide they don’t like us and want to destroy us, they’re just carrying out the objectives that we put in. But that whole approach, the behaviorist approach, if you like, of saying, this is the objective, and we’re just going to build machines that should optimize it, it just doesn’t work.” He ends on the cautionary note of “wireheading” experiments, where rats are able to directly control stimulation to their pleasure centers, and end up dying as a result.
10:25 — David interjects that this is an example of what Skinner called selection by consequences. Stuart responds that this doesn’t line up with evolutionary success. In his book, he advocates “getting rid of this way of thinking about AI altogether, the idea that we specify objectives, we put them into machines, the machines optimize the objectives. That’s a mistake, it’s a seductive mistake because it looks like you’re getting technology to do your bidding, but the world’s cultures are full of stories saying it’s a mistake.”
12:03 — Terry describes one such story, the golem, a magical automaton of sorts. Like AI, it’s programmed by people to accomplish a goal, but it lacks discernment, and unintended consequences can result. He thinks Stuart’s concerns about control and David’s description of the Amish approach to technology share a cautionary imperative: “No, wait a minute, before we do this, let’s get a sense of it.” The problem is that we don’t have the predictive power to do that with AI and the other technological challenges we face. He explains that further, and ends with “there’s an AI hacking cold war developing, as there is, and Stuart has a lot to say about AI weaponry and the cold war, that potentially could be a hot war of AI weaponry as well. It’s because AI is probably also the tool we need to anticipate the effects of AI, and that, I think, is a really interesting challenge.”
17:19 — David brings in the subject of complex adaptive systems and their two types, CAS1 and CAS2. “Meaning number one is a complex system that’s adaptive as a system. Meaning number two is a complex system composed of agents following their respective adaptive strategies. And when it’s the second complex system, then it’s never going to work well at the whole system level.” He relates that to AI, explaining that, “A CAS2 system gets you your hacking AI, of course, it’s oppositional, it’s agents in conflict with each other, that will never function well as a system. And the more AI improves those competitive and disruptive skills, then the more disruptive the system will get.”
18:19 — David explains, “The only solution to the problem of unforeseen consequences is humble experimentation. We need to make our best guesses, and then we need to try it out…assess the consequences with respect to what we’re trying to maximize, and…do that again and again and again. And if our target of selection is the whole system…CAS1 system that functions well as a system…if we take a humble experimental approach…there’s our best chance to achieve our goals.” He ends with an explanation of why we need to make humans an integrated part of such systems.
20:39 — Stuart agrees with the need for experimentation, but says theoretical understanding is also needed to drive it. He describes a “provably beneficial” AI system he’s been working on with colleagues: it doesn’t know the human preferences it’s supposed to be satisfying. He describes how they’re experimenting with this in a game-theoretic sense. They’ve been able to show that “no matter what preferences they have, the behavior of the machine ends up being beneficial to the humans.” He explains the role of theory in experimentation to create beneficial AI. David comments that this might require something like moral norms. Stuart explains that, in a sense, such machines do have moral norms, because “their goal is precisely the benefit of humanity.” (A toy sketch of this idea, for technically minded readers, follows these notes.)
25:12 — Terry brings up the practical problem of aligning political decision-making processes with the real-world problems those processes need to address. “I’ve become relatively cynical about political top down solutions, whether getting corporations to work together, nations to work together, different religious groups to work together and so on.” He explains why the amplification of AI can make this problem even harder to get a grip on. He wonders about “the fact that we’ve now got a global communication system set up in which AI could play a role in determining who talks to whom and what information gets passed on and what doesn’t.” He asks if that is “also a recipe for a user problem, a control problem, or is there actually another way around it, that is a way around the political impasses.”
27:48 — Stuart comments that though AI systems are controlling a lot of the communications connections, they don’t even know humans exist, so at this point they can’t help resolve those political impasses. He is cautiously optimistic about cooperation on the problem of controlling AI, and somewhat reassured that even Xi Jinping is talking about the existential threat it poses. Stuart and Terry have an exchange about China and the control problem.
30:34 — David returns to behaviorism to report on its current generation. He describes the role of symbolic thought and values-based therapy, parts of acceptance and commitment training. He ends the segment with the observation that this generation of behaviorism may have something to contribute to the field of AI.
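For readers who want to see how “not knowing the human preferences it’s supposed to be satisfying” can be made concrete, the toy Python sketch below illustrates the off-switch intuition behind the approach Stuart describes. It is not his group’s actual formulation or code, only a hypothetical illustration under stated assumptions: the machine holds a probabilistic belief about the human’s utility U for a proposed action and can either act now, switch itself off, or defer to a human who will veto any harmful action. The Gaussian belief, the perfectly rational vetoing human, and the function name expected_values are all invented for this example.

```python
# Toy sketch (not Russell's actual model): a machine uncertain about the
# human's utility U for a proposed action compares three options:
#   act now      -> expected value E[U]
#   switch off   -> value 0 (do nothing)
#   defer        -> the human vetoes harmful actions, so value E[max(U, 0)]
# Because E[max(U, 0)] >= max(E[U], 0), deferring is never worse in expectation.

import random

def expected_values(utility_samples):
    """Estimate the machine's expected value for each option from belief samples."""
    act_now = sum(utility_samples) / len(utility_samples)                         # E[U]
    switch_off = 0.0                                                              # do nothing
    defer = sum(max(u, 0.0) for u in utility_samples) / len(utility_samples)      # E[max(U, 0)]
    return act_now, switch_off, defer

random.seed(0)
# The machine's belief about the human's utility: broad uncertainty centred
# near zero, i.e. it genuinely does not know whether the action is wanted.
beliefs = [random.gauss(0.1, 1.0) for _ in range(100_000)]

act, off, defer = expected_values(beliefs)
print(f"act now:    {act:+.3f}")
print(f"switch off: {off:+.3f}")
print(f"defer:      {defer:+.3f}   (never lower than the other two)")
```

The advantage of deferring grows with the machine’s uncertainty about U; if the machine were certain of the human’s preferences, deferring would gain it nothing, which is why uncertainty about preferences is the key ingredient in this framing.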
Artificial Intelligence in the Noosphere: Part Three Transcript
Part Three Notes
David asks about analogies between the noosphere and the human nervous system, particularly the background autonomic functions versus the conscious, cognitive ones.
0:21 — Terry explains that the autonomic nervous system maintains the metabolic functions in the background, below the level of consciousness. “In the evolution of nervous systems, there are nervous systems that were basically just that. So if you look in particular organisms that are radially symmetric, they mostly don’t have heads and tails and eye spots and mouths.” Animals that developed bilateral symmetry and sensory organs evolved head ends with brains. Because they moved in response to external stimuli, they needed brains for predicting the future. Animal nervous systems are broken up into these maintenance and predictive parts. AI today is largely the maintenance/autonomic kind.
2:09 — Terry explores the question of whether AI is developing a predictive function, and the analogy to a global brain. He wonders to what extent we’ll be able to rely on AI to carry out those autonomic functions of a global society—so “we don’t have to think about it, it’s just handed off. And we have to have then this other part of the system, which is the look ahead, the experiment, the trial and the error that we’ve been talking about, that’s what brains are mostly about.”
3:51 — David comments that such predictive systems do already exist to a degree. He thinks the way Terry juxtaposes the biological and the human technological realms is apt, and asks Stuart about this.
4:33 — Stuart observes that all these questions could be asked without technology. “You could look at the whole human social and economic system and ask, how do we draw analogies to the autonomic nervous system or the sympathetic nervous system. And there are lots of parts of a system that do sort of work like the metabolic system, they just function in the background.” He describes how that has worked reasonably well for a long time in human society, in the absence of anything like AI.
6:32 — Stuart describes how AI changes that picture. He mentions the “flash crash”. He discusses the employment market, and describes an example in which an Amazon AI discriminated against women applying for technical positions.
8:51 — David asks if that Amazon case was a feature or a bug. Stuart explains that it was unintentional, and a disaster in several ways. He describes other sectors where AI is increasingly taking over, noting that “we could start to see large sectors of the economy where it’s actually AI algorithms that are deciding how that sector evolves, its structure, the flows of money and goods, and so on. And just like the employment market, that could go wrong in ways that we might regret.”
10:48 — Stuart discusses the effects of AI in social networks, then draws an analogy to gene editing of human offspring and the potential consequences of removing the element of random variation from reproduction.
12:40 — David remarks that the conversation has highlighted “the fact that the Noosphere, the idea of some global thinking envelope, is not going to self-organize. I mean, it’s extremely challenging to actually accomplish something like that… it’s something that we have to construct and it’ll be extraordinarily difficult to do so. It’s going to take the very, very best of our knowledge to do so.”
13:34 — Terry brings up the question of whether we should have a moratorium on creating conscious AI. A concern is that “when we think about thought and cognition and human experience, we’re dealing with feeling, we’re dealing with the possibility to suffer…or to feel joy.” He suggests that “especially in the context of the autonomic versus the predictive nervous system…the autonomic nervous system is a feeling nervous system, but it doesn’t give that information…until there’s a problem. And the intelligence, when we talk about AI, we’re talking about intelligence that doesn’t feel, to some extent, that there’s nobody home.” This is unlike the common science-fiction conception of AI as having its own desires and sentience.
15:34 — Terry explains there are reasons to be concerned about either conception of AI. “The AI that has desires and fears and angers is scary, but also the AI that’s just a machine is scary. And I think that neither of them feel comfortable, and I think this is one of the discomforts about the future.”
16:31 — Stuart adds that because it’s hard to define what consciousness is, it’s hard to know whether we’ve created conscious machines. Consciousness isn’t his real concern. If we think an AI may be conscious, “the extra concern is that then maybe it should have moral rights. And it goes from zero weight in our calculations of the future, to having a weight that might actually weigh against our own interests in the future, and that’s something we should be concerned about.”
18:39 — In the final part of their discussion, David suggests that as AIs become more like humans, we’ll attribute consciousness to them and have feelings toward them. Stuart says this is then something we can regulate—“don’t build machines that engender those kinds of responses in humans. So in particular, do not make them humanoid in form because that can bypass a whole lot of rational analysis.” Terry ends by saying that “it’s human gullibility that’s the issue, and I think of the Turing test as a human gullibility test, that is, we build a machine that can now fool us. I think that turns out probably to be easier than we think.”