Why AI Geniuses Haven't Created True Thinking Machines

As we saw yesterday, artificial intelligence (AI) has enjoyed a string of unbroken successes against humans. But these are successes in games where the map is the territory. Therefore, everything is computable.

That fact hints at the problem tech philosopher and futurist George Gilder raises in Gaming AI (free download here). Whether all human activities can be treated that way successfully is an entirely different question. As Gilder puts it, AI is a system built on the foundations of computer logic, and when Silicon Valley's AI theorists push the logic of their case to a singularity, they defy the most crucial findings of twentieth-century mathematics and computer science.

Here is one of the crucial findings they defy (or ignore): Philosopher Charles Sanders Peirce (1839–1914) pointed out that, generally, mental activity comes in threes, not twos (so he called it triadic). For example, you see a row of eggs in a carton and think "12." You connect the objects (eggs) with a symbol, "12."

In Peirce's terms, you are the interpretant, the one for whom the symbol "12" means something. But eggs are not "12," and "12" is not eggs. Your interpretation is the third factor that makes "12" mean something with respect to the eggs.

Gilder reminds us that, in such a case, the map is not the territory (p. 37). Just as "12" is not the eggs, a map of California is not California. To mean anything at all, the map must be read by an interpreter. AI supremacy assumes that the machine's map can somehow be big enough to stand in for the reality of California and eliminate the need for an interpreter.

The problem, he says, is that the map is not and never can be reality. There is always a gap:

Denying the interpretant does not remove the gap. It remains intractably present. If the inexorable uncertainty, complexity, and information overflows of the gap are not consciously recognized and transcended, the gap fills up with noise. Congesting the gap are surreptitious assumptions, ideology, bias, manipulation, and static. AI triumphalism allows it to sink into a chaos of constantly changing but insidiously tacit interpretations.

Ultimately AI assumes a single interpretant created by machine learning as it processes ever more zettabytes of data and converges on a single interpretation. This interpretation is always that of a rearview mirror. Artificial intelligence is based on an unfathomably complex and voluminous look at the past. But this look is always a compound of slightly wrong measurements, thus multiplying its errors through the cosmos. In the real world, by contrast, where interpretation is decentralized among many individual minds, each person interpreting each symbol, mistakes are limited, subject to ongoing checks and balances, rather than being inexorably perpetuated onward.

Does this limitation make a difference in practice? It helps account for the ongoing failure of Big Data to provide consistently meaningful correlations in science, medicine, or economics research. Economics professor Gary Smith puts the problem this way:

Humans naturally assume that all patterns are significant. But AI cannot grasp the meaning of any pattern, significant or not. Thus, from massive number crunches, we may learn (if that's the right word) that

Stock prices can be predicted from Google searches for the word "debt."

Stock prices can be predicted from the number of Twitter tweets that use "calm" words.

An unborn baby's sex can be predicted by the amount of breakfast cereal the mother eats.

Bitcoin prices can be predicted from stock returns in the paperboard-containers-and-boxes industry.

Interest rates can be predicted from Trump tweets containing the words "billion" and "great."

If the significance of those patterns makes no sense to you, it's not because you are not as smart as the Big Data machine. Those patterns shouldn't make any sense to you. There's no sense in them because they are meaningless.

Smith, author with Jay Cordes of The Phantom Pattern Problem (Oxford, 2020), explains that these phantom patterns are a natural occurrence within the huge amounts of data that big computers crunch:

even random data contain patterns. Thus the patterns that AI algorithms discover may well be meaningless. Our seduction by patterns underlies the publication of nonsense in good peer-reviewed journals.

Yes, such meaningless findings from Big Data do creep into science and medicine journals. That's partly a function of thinking that a big computer can do our thinking for us even though it can't recognize the meaning of patterns. It's what happens when there is no interpreter.
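To see how easily such phantom patterns arise, here is a minimal sketch in Python (my illustration, not an example from Gilder or Smith; the variable names, series lengths, and counts are arbitrary assumptions chosen for the demo). It correlates a purely random "returns" series against a thousand equally random candidate "predictors" and reports the strongest match:

```python
import numpy as np

# Illustration only: every series below is pure noise, so any "pattern"
# an algorithm finds in it is necessarily meaningless.
rng = np.random.default_rng(0)

n_days = 100          # length of each daily series
n_predictors = 1000   # number of candidate "predictors" to data-mine

returns = rng.normal(size=n_days)                     # fake "stock returns"
predictors = rng.normal(size=(n_predictors, n_days))  # fake candidate signals

# Correlate every candidate with the returns and keep the strongest one.
correlations = np.array([np.corrcoef(p, returns)[0, 1] for p in predictors])
best = int(np.argmax(np.abs(correlations)))

print(f"Best of {n_predictors} random predictors: r = {correlations[best]:.2f}")
# Typically prints |r| around 0.3-0.4: a "significant-looking" correlation
# produced by chance alone, with nothing for an interpreter to find.
```

With enough candidate series, the best of them will usually show a correlation strong enough to pass a naive significance test even though every series is noise, which is how phantom patterns can slip past an algorithm that has no interpreter to ask whether the pattern means anything.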

Ah, but, so we are told, quantum computers will evolve so as to save the dream of true thinking machines. Gilder has thought about that one too. In fact, he's been thinking about it since 1989, when he published Microcosm: The Quantum Era in Economics and Technology.

It's true that, in the unimaginably tiny quantum world, electrons can do things we can't:

A long-ago thought experiment of Einstein's showed that once any two photons (or other quantum entities) interact, they remain in each other's influence no matter how far they travel across the universe (as long as they do not interact with something else). Schrödinger christened this entanglement: the spin (or other quantum attribute) of one behaves as if it reacts to what happens to the other, even when the two are impossibly remote.

But, he says, it's also true that continuously observing a quantum system will immobilize it (the quantum Zeno effect). As John Wheeler reminded us, we live in a participatory universe where the observer (Peirce's interpretant) is critical. So quantum computers, however cool they sound, still play by rules where the interpreter matters.

In any event, at the quantum scale, we are trying to measure atoms and electrons using instruments composed of atoms and electrons (p. 41). That is self-referential and introduces uncertainty into everything: "With quantum computing, you still face the problem of creating an analog machine that does not accumulate errors as it processes its data" (p. 42). Now we are back where we started: Making the picture within the machine much bigger and more detailed will not make it identical to the reality it is supposed to interpret correctly.
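A rough way to see the error-accumulation point in ordinary code (this is only an analogy I am adding, not Gilder's argument or a model of real quantum hardware; the 1% error figure and the step counts are arbitrary assumptions): model each analog processing step as carrying a small random error and watch the compounded result drift as the number of steps grows.

```python
import numpy as np

# Analogy only (not a model of quantum hardware): each processing step is
# supposed to apply a gain of exactly 1.0, but is carried out with a small
# random error, the way an analog measurement would be. The errors compound
# instead of cancelling.
rng = np.random.default_rng(1)

true_value = 1.0
error_per_step = 0.01  # assume 1% uncertainty per step

for n_steps in (10, 100, 1000, 10000):
    gains = 1.0 + rng.normal(scale=error_per_step, size=n_steps)
    computed = true_value * np.prod(gains)
    print(f"{n_steps:6d} steps -> computed value {computed:.3f} (true value 1.000)")
# The more steps the machine takes without correcting each step's error,
# the further the computed value can drift from the true one.
```

Digital machines suppress this drift by quantizing and correcting after every step; an analog computer, quantum or otherwise, has to fight it by some other means, which is the kind of problem Gilder is pointing to.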

And remember, we still have no idea how to make the Ultimate Smart Machine conscious because we don't know what consciousness is. We do know one thing for sure now: If Peirce is right, we could turn most of the known universe into processors and still not produce an interpreter (the consciousness that understands meaning).

Robert J. Marks points out that human creativity is non-algorithmic and therefore uncomputable. From which Gilder concludes, "The test of the new global ganglia of computers and cables, worldwide webs of glass and light and air, is how readily they take advantage of unexpected contributions from free human minds in all their creativity and diversity. These high-entropy phenomena cannot even be readily measured by the metrics of computer science" (p. 46).

It's not clear to Gilder that the AI geniuses of Silicon Valley are taking this in. The next Big Fix is always just around the corner and the Big Hype is always at hand.

Meanwhile, the rest of us can ponder an idea from technology philosopher George Dyson: "Complex networks (of molecules, people or ideas) constitute their own simplest behavioral descriptions" (p. 53). He was explaining why analog quantum computers would work better than digital ones. But, considered carefully, his idea also means that you are ultimately the best definition of you. And that's not something that a Big Fix can just get around.

Here's the earlier article: Why AI geniuses think they can create true thinking machines. Early on, it seemed like a string of unbroken successes. In Gaming AI, George Gilder recounts the dizzying achievements that stoked the ambition, and the hidden fatal flaw.

