Disclaimer

Many of my essays are quite old. They were, in effect, written by a person who no longer exists in that my views, beliefs, and overall philosophy have grown and evolved over the years. Consequently, if I were to write on the same topics again, the resulting essays might differ significantly from their current versions. Rather than edit my essays to remain contemporary with my views, I have chosen to preserve them as a record of my past inclinations and writing style. Thank you for understanding.

October 2008

How Smart Can an Apple IIe Be?

Computers are super-swell - algorithms, not so much

Brief Description

This essay presents two theories. First, that once we decipher the algorithms at work in animal brains, we will have the bizarre realization that we could have implemented artificial intelligence much sooner, on much older computer hardware. Second, that common estimates of brain-to-processor equivalency are too high, i.e., that intelligence isn't as complicated as brain size and complexity suggest.

Full Description

Sections:

Strolling Down Memory Lane

When I was a kid I had this idea for how to create a general conversational artificial intelligence, the kind made famous by the Turing Test. The way I figured it, all you really need to do is let the system look up the definition of any word in some self-referential database, defined as follows. When presented with a sentence and the challenge of understanding it, the system would look up each word in the database. Needless to say, the definition of one word would be written using other words, so the system would cascade down a level and look up the definitions of all the words used in the definition of the first word, thus its self-referentiality. Each of those definitions would contain still more words, and so the cascade continues...but I reasoned that the cascade must eventually terminate, at which point the system would perfectly understand the definition of the original word from the sentence. Repeat this process for every word in the sentence and the system ought to be able to understand any sentence you present to it.

In retrospect, it seems clear that this database is simply a fully hyperlinked dictionary. Of course, not many people were talking about hyperlinks in the early eighties, which is when I had this idea at around the age of seven or eight. Obviously, my childhood idea contained more than a few flaws. Not only is a dictionary on a bookshelf not an AI, but a computerized dictionary laced with word-to-definition hyperlinks and hooked into a program that chomps words off a sentence one at a time isn't an AI either. Something was wrong (inherent circularity in the cascade being one major problem, not to mention a complete ignorance of grammar, of social contextual cues such as slang, and of what is often tossed up as "common sense" knowledge, meaning we have no idea how to define that kind of knowledge). Hey, I was only eight, it was a fair shot, give me a break!
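To make the childhood scheme (and its central flaw) concrete, here is a toy sketch in Python with a handful of invented dictionary entries -- none of this code or data comes from the original idea, which never got past BASIC tinkering. It shows that the cascade never bottoms out in anything but more words: it only halts at all if you track which words you've already visited, because every chain of definitions eventually loops back on itself.

```python
# Toy self-referential dictionary: every definition is made of other words.
# All entries are invented for illustration.
DICTIONARY = {
    "cat":    ["small", "animal"],
    "small":  ["little"],
    "little": ["small"],          # circular: small <-> little
    "animal": ["living", "thing"],
    "living": ["thing", "alive"],
    "alive":  ["living"],         # circular: alive <-> living
    "thing":  ["object"],
    "object": ["thing"],          # circular: object <-> thing
}

def cascade(word, seen=None):
    """Expand a word into the set of all words reachable via definitions.

    Without the 'seen' set the recursion would never terminate, because
    every chain of definitions eventually loops back on itself -- the
    flaw in the original idea: the cascade ends in circles, not meaning.
    """
    if seen is None:
        seen = set()
    if word in seen:
        return seen
    seen.add(word)
    for w in DICTIONARY.get(word, []):
        cascade(w, seen)
    return seen

print(sorted(cascade("cat")))
# ['alive', 'animal', 'cat', 'little', 'living', 'object', 'small', 'thing']
```

The cascade "terminates" only in the trivial sense that the dictionary is finite; it never reaches anything outside the web of words, which is the circularity problem in a nutshell.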

Current Computers, Artificial Intelligence, How's All That Working Out?

About the same time, around the ages of seven to ten, I started programming the family Apple IIe. I cannot begin to describe the nostalgic feelings that sweep over me when I recall those experiences. I know, I'm a geek, get over it. I loved programming. I was a kid, I was horrible at it, I used BASIC for God's sake and didn't even really grok the "gosub" command very well, but I truly loved the experience. I invoke my beloved Apple IIe because it often comes to mind when I contemplate the point of this essay: that once we "solve" artificial intelligence, we will discover to our unfathomable humiliation that we could have implemented it decades earlier on stupefyingly weak computers, had we only possessed the proper algorithm. In other words, I am a strong adherent to the argument that AI is primarily a software problem, not a hardware problem. Imagine how weird it would be in, say, 2030, to dig an old Apple IIe out of a museum and implement an impressively capable AI on it (by the IIe's standards) using algorithmic discoveries then available.

Seems to me -- and I will demonstrate it here -- that Moore's Law has brought us to the point where AI is in large part a software problem. That is, we have some really powerful computers lying around, but I'm not seeing the level of intelligence that power ought to enable. Let's try to put some numbers on this briefly. Scientists and philosophers have many different methods of mapping between the computational performance of conventional computers and brains found in nature. For example, estimate the power of a single neuron and multiply by the number of neurons in a given brain, or estimate the power of a single synapse and multiply by the number of synapses, or some other heuristic along those lines. These different methods yield a relatively wide range of derived capability, but there is nevertheless a general consensus that contemporary desktop computers ought to be well past the stereotyped insect level by now. For example, I am writing this essay on a Mac with two dual-core Intel Xeon processors running at 2.66 GHz. It is distressingly difficult to put hard numbers on the computational capabilities of processors, but this machine probably pumps about 10.64 gigaflops at the upper end (loosely: one double-precision flop per core per cycle...I admit, a very round estimate). Assuming a machine can theoretically pipeline one flop per cycle, flops and IPS are equivalent (anything within an order of magnitude is functionally equivalent from my point of view; don't bother me with factors of two or four, I don't care). I will therefore use MIPS for the remainder of the discussion on the assumption that flops and MIPS are effectively equivalent. Therefore, I declare desktop computers in 2008 to be worth about 10,000 MIPS.
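The back-of-the-envelope arithmetic above can be checked in a few lines (the one-flop-per-core-per-cycle figure is the essay's own loose assumption, not a measured number):

```python
# Peak-throughput estimate from the text: two dual-core Xeons at
# 2.66 GHz, assuming (very roughly) one double-precision flop per
# core per cycle.
processors = 2
cores_per_processor = 2
clock_hz = 2.66e9
flops_per_core_per_cycle = 1  # a very round assumption

peak_flops = processors * cores_per_processor * clock_hz * flops_per_core_per_cycle
print(peak_flops / 1e9, "gigaflops")                              # 10.64
print(round(peak_flops / 1e6), "MIPS, treating flops as IPS")     # 10640
```

Rounding 10,640 down to 10,000 MIPS is well within the order-of-magnitude tolerance the essay declares.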

How many MIPS is an insect worth? [Side-note: some calculations and plots on this topic use insects as the example of the day, others use arachnids. Let's dispense with the distinction for the remainder of the discussion.] Moravec's plots estimate spider brilliance at between ten and 100 MIPS. We'll go with 100 to be conservative. Bostrom believes Moravec might be low by a full factor of 1000 for human brains. Let's assume he believes Moravec is similarly low for other animals. Therefore, we can assume that Bostrom would grant a spider 1000 times Moravec's estimate, or 100,000 MIPS (then again, maybe Bostrom's factor of 1000 should apply to Moravec's low estimate of ten, which would yield 10,000 MIPS for Bostrom; I'm not sure). Kurzweil seems to approximately agree with Moravec, but his plots seem to settle a little higher, somewhere around 1000 MIPS.

I am calling my computer 10,000 MIPS at the high end, as stated above. Moravec says insects are worth 100, Bostrom says 100,000, Kurzweil says 1000. So, by Moravec's standard my computer has 100 times that capability, by Bostrom's it has between one tenth and one-to-one capability, and by Kurzweil's it has ten times spider smartiness. By some of these estimates desktop computers were giving insects a run for their money almost ten years ago. Even if that was optimistic, the numbers really do suggest that we're there now (Bostrom may not quite agree but even he ought to think that a relatively expensive computer today has got it made), so our desktop computers are basically insects if not a solid order of magnitude better. We have the calculations of several theorists on the subject suggesting such a conclusion.
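The comparison above can be tabulated directly, using the round figures already stated (Moravec 100 MIPS, Kurzweil 1000, Bostrom 100,000, desktop 10,000):

```python
# Ratio of a 2008 desktop (~10,000 MIPS, per the estimate above)
# to each theorist's insect/spider estimate.
desktop_mips = 10_000
insect_estimates_mips = {
    "Moravec": 100,       # upper end of his 10-100 MIPS spider range
    "Kurzweil": 1_000,
    "Bostrom": 100_000,   # Moravec's 100 times his factor of 1000
}

for name, insect_mips in insect_estimates_mips.items():
    ratio = desktop_mips / insect_mips
    print(f"{name}: desktop is {ratio:g}x an insect")
# Moravec: desktop is 100x an insect
# Kurzweil: desktop is 10x an insect
# Bostrom: desktop is 0.1x an insect
```

Even on Bostrom's high-end figure the desktop is within an order of magnitude, which is the essay's working tolerance.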

Insects Galore

So my point is, we seem to have at least achieved, if not thoroughly surpassed, insect computational ability in 2008, but there's a problem. Computers do a lot of impressively intelligent things, but one thing they don't do very well is act like living things. I generally follow the research on vision algorithms, neural nets, Bayes nets, language parsers, voice recognition, SLAM robotics, self-driving vehicles, ontologies, not to mention my true loves: collective robotics, swarm intelligence, evolutionary algorithms, etc. -- even my own PhD thesis covered some topics directly relevant to classical AI, such as intelligent and heuristic tree-search optimizations and computational topology...the list of fields in which AI research is progressing is quite extensive. I am impressed by accomplishments in all of these fields, but my computer still can't duplicate the behavior of an insect. At least, I don't see it doing that. Insects live in extremely complicated environments. Many can fly, which means they are navigating a dynamic, enemy-laden, three-dimensional world of visually confusing clutter at aeronautical speeds. They find, catch, and eat food. They find, catch (and often eat) mates. They build complex physical structures (such behavior might be more genetically "hard-wired" than its counterpart in vertebrates, but it's in there somewhere!). These little buggers (sorry) are doing some impressively complicated things, and my computer just isn't doing things like this.

One counter-argument is that we don't have versatile robotic bodies yet. How can a desktop computer even pretend to be an insect if it has almost no way of demonstrating its intelligence by interacting with the world? Put differently, how would we know if a computer is intelligent if all it does is sit there? Would a stream of text describing what it's like to be an insect sell us on its insect intellect? "Now I am buzzing, now I am hungry, now I am transmitting malaria, I'm such a clever insect-level intelligence. Marvel in awe at my description of myself."...if you see my meaning.

Any discussion on what we might be able to do if we had animalistic robotic bodies to put our computers in must be highly speculative -- we can't run the experiment since we don't have the robots -- but my hunch is that if someone dropped a robotic housefly with a 2008 processor in it on the desks of the cutting edge AI researchers and said, "Okay, program it", we wouldn't be able to do it. Not quite, not yet, not today. Remember, a robotic housefly as I am suggesting is far, far more complicated than current robots, leagues beyond the current Mars rovers, the DARPA grand challenge vehicles, Pleo, or anything wandering the halls of computer science and electrical engineering departments around the world. Insects have literally tens of thousands of detailed pressure, temperature, and chemical sensors all over their bodies, and they have sophisticated vision, hearing, and smell. They also have kinematic and other internal senses describing the health of each of their internal organs and other internal structures. They sense hunger, fatigue, and overall physical state, not to mention monitoring numerous chemical and nutrient levels. They have fabulously dexterous and fast motor capabilities, with incredibly adaptable limbs that are resilient to damage and a wide range of constantly changing environmental conditions. The hypothetical perfect robotic housefly I am describing would have all of these physical attributes; it is nothing like existing robots, not even the most advanced currently constructed. Could we program such a thing? I am not remotely convinced that we would have a clue how to program insect level intelligence into such a robot...and the point is, we already have the 2008 processor to do it, which by the argument above ought to be up to the number-crunching challenge. We simply don't know the algorithm yet.

However, even without robotic bodies, I still think the challenge stands. I believe a computer programmed to possess intelligence could demonstrate that intelligence through its interactions with people, with the internet, and with other intelligent computers. Alternatively, it could demonstrate its intelligence through the portrayal of a simulated insect in a virtual world, such that it exhibits the complexity and versatility of real insects. One way or the other, I seriously doubt that what's going on is that my computer actually is as intelligent as an insect but just can't show me because it doesn't have a body. No, the thing is not that smart, period.

I am not attempting to disparage AI and ALife research. I recognize and applaud the steady, solid strides we are making, and I am certain we will figure this out, reasonably soon too, given the remarkable progress in studying how brains work. I am simply observing that there is an opportunity to reflect on some serious weirdness arising from the disparity between current processor capabilities and our current understanding of intelligence. This is where the essay wraps back to the title. Once we discover the algorithmic solutions to intelligence, I believe we are going to have some surreal realizations, namely that we could have created incredibly intelligent machines long before we actually did, because the processing hardware existed well before the software.

Time Travelers from the Future Know Everything

Imagine the following scenario. It is 2008 and our computers can't act like insects yet. Suddenly, in a flash of temporal paradoxicity, a time-traveler from thirty years hence appears in our midst, promptly sits down and programs our computer to act like an insect. We stand there in disbelief watching our 2008 computer act like a fully living organism and think to ourselves, "Well heck, I should have been able to do that too, it's the same damn computer after all, same keyboard, same mouse. Why couldn't I type the same program?!"

Q.E.D. That would be weird.

Just How Smart is an Insect Anyway

I am curious how far this retroactive algorithmic knowledge could apply. How far back could we go and grant various levels of intelligence to computers of unbelievably mediocre capability? In other words, how smart can an Apple IIe truly be, if properly programmed? What is its true potential if imbued with the correct algorithm? That simple exploratory day-dream is really the only point of this long-winded essay.

One response is to reapply the math from above, which argues that we only recently achieved the necessary computational capacity for insect level intelligence. By that logic, the answer to my question about how smart an Apple IIe (1 MIPS or so) can be is an unequivocal "not much, buddy!"...but I believe the estimates by Moravec and others of brain-to-computer equivalence may be biased extremely high. I believe the necessary computation to replicate a given observed level of animal intelligence may be far lower than suggested by the calculations commonly derived from that organism's brain. While Moravec and Kurzweil feel that insect level intelligence requires 100 to 1000 MIPS and Bostrom believes it may require as much as 100,000, I believe we may discover that a fraction of that is necessary, orders of magnitude less perhaps, although I shudder to put actual numbers on my prediction because I have no precise equation for performing such a calculation. Nevertheless, I do make that general claim here. How can I defend such an assertion? Why do I disagree with the likes of Moravec, Bostrom, and Kurzweil on this issue?

Evolution is Intolerably Stupid Sometimes

Evolution is simultaneously one of the most brilliant and most stupid forces of nature. The dichotomy is nothing short of stunning. Evolution is excellent at optimizing systems given the current situation (its brilliance), but cannot correct early locked-in bad choices (its stupidity). When evolution wanders onto a path, it often remains stuck on that path forever, because jumping to another (perhaps more efficient) path would require changes that natural selection cannot select for, the intermediate path connecting the one to the other being detrimental to the organism's survival. In fitness landscape terminology, this is the phenomenon of finding a local maximum while missing a far superior global maximum. For example, the human retina is completely backwards. I don't just mean conceptually backwards, I mean the retina is physically constructed backwards! There is absolutely no excuse for this from a forward-thinking engineering perspective. Any engineering student doing such a thing would rightly deserve a failing grade from their professor (points for thinking outside the box perhaps). Evolution had no choice though. Long, long ago, the progenitors of our ancestral line began to evolve eyes, and from the outset the retina got set up backwards...and that's all she wrote, folks. From that point forward we were screwed in at least three ways: since light must go through the nerves before hitting the photoreceptors, we lose some photons due to imperfect transparency of the nerves (so we can't see as well in the dark), we lose some acuity due to scattering (so we can't see as sharply), and we have a blind spot in which we literally can't see a darn thing without compensation by the other eye and intuitive "fill-in-the-blank" neural processing (so we're partially blind, crikey!). All evolution could do was optimize the unforgivable system initially made available to it.
Turning the retina around the right way in a single generational mutation is impossible, and the retina cannot be slowly turned around over many generations because a sideways-facing retina is useless (that valley between maxima on the fitness landscape), so we're stuck with these stupid eyes now and there's nothing evolution can do about it. The retina is but one common example of evolution getting stuck on an upward slope toward a suboptimal local maximum and missing a far better global maximum for the rest of that particular lineage's existence. Cephalopods -- squid, octopuses, and cuttlefish -- got it right, God bless their souls, because they evolved vision completely independently of vertebrates, and got their retinas started out correctly early on.
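The local-maximum trap described above is easy to demonstrate with a toy hill-climber on an invented one-dimensional fitness landscape (a sketch only, not a model of any real evolutionary process): a search that accepts only uphill steps, like natural selection, stalls on whichever peak it starts near and can never cross the valley to a higher one.

```python
# A greedy hill-climber on a 1-D "fitness landscape" with two peaks.
# Like natural selection, it only accepts changes that improve fitness,
# so starting near the lower peak it can never cross the valley to the
# higher one. Landscape values are invented for illustration.
landscape = [0, 2, 4, 6, 4, 2, 1, 3, 6, 9, 12, 9]
#  local peak: index 3 (fitness 6); global peak: index 10 (fitness 12)

def hill_climb(pos):
    """Step to the best neighbor until no neighbor improves fitness."""
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbors, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos  # stuck: every available step is downhill or flat
        pos = best

print(hill_climb(1))   # starts near the local peak  -> 3  (fitness 6)
print(hill_climb(8))   # starts near the global peak -> 10 (fitness 12)
```

The climber starting at position 1 ends on the lesser peak forever, exactly the backwards-retina situation: locally optimal, globally stuck.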

I am virtually certain that the organization of animal brains suffers from countless local maxima. Consequently, I strongly believe that brains are highly suboptimal. Don't get me wrong, I believe they are highly locally optimal. In other words, brains perform their algorithms very, very well, but they might be using suboptimal algorithms! I believe that once we decipher the processing algorithms at work in the brain, we will discover that many of the functions being calculated could have been performed much more efficiently with a completely different neural configuration, or alternatively, could have been replaced with entirely different functions that achieve the same result more efficiently. Thus, I postulate that estimates of the computational equivalence of brains based on neural or synaptic analogies to processors -- estimates which do not take into account the likely inefficiencies of brain organization -- are biased high.

Conclusion

Thus, the following is my prediction: as we decipher various processes of the brain, we will discover that many of those processes could have been designed (with forethought and planning) to operate with much lower computational demands. As a corollary, we will discover, much to our embarrassment, that we could have created intelligent machines much earlier in history, on the argument that the requisite processing capability was available long ago but we simply didn't know the right algorithms yet.

I still have my old Apple IIe in a box in the basement. I wonder how smart it can truly be.

I would really like to hear what people think of this. If you prefer private feedback, you can email me at kwiley@keithwiley.com. Alternatively, the following form and comment section is available.

Comments

Name: Anonymous  Date/Time: 2012/03/05 18:41:01 GMT
You just blew my 500 MIPS mind.