AI is Religion Not Science

A few weeks ago, I posted an explanation of why it’s highly unlikely that anybody will replicate the human brain on a digital computer anytime soon. In that post, I explained that a single human brain is an analog computer that’s enormously more complex than any digital computer or combination of digital computers.

In this post, I’ll explain why AI is essentially a set of religious beliefs rather than scientific concepts.

AI as “Intelligent Design”

Wikipedia defines “intelligent design” as the proposition that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”

The entire concept of “intelligent design” is based upon the concept of “irreducible complexity”–that certain biological structures are so complex that they could not have evolved naturally.

No reputable scientist believes that “intelligent design” is science, because pawning the complexity of biological structures off on a designer simply raises the question of who designed the designer.

More importantly, as Richard Dawkins explains in The God Delusion, “far from pointing to a designer, the illusion of design in the living world is explained with far greater economy and with devastating elegance by Darwinian natural selection.”

In other words, natural selection is not some kind of “second best” way to create biological complexity, as if the presence of a designer would have made things easier or more likely.

Instead, natural selection is not just the only way that something as complex as the human brain has ever come into existence; it is also the most efficient and elegant way for such complexity to arise.

The notion that something as complex as the human brain could be designed using components entirely different from the components of the human brain falls into the same fallacy as “intelligent design”: it assumes that intentional design can “do the job” faster and better than natural selection.

AI Through “Natural Selection”

Some AI theorists believe that it may be possible to evolve AI from a seminal “singularity” in a process that would be similar to the way that organic life evolved.  However, that concept is simply restating the “intelligent design” concept in another form.

The human brain evolved from single-celled organisms over billions of years in response to an environment that was infinitely more complex, in total, than anything within it, including the human brain.

So unless you’re planning to have your “singularity” take billions of years to evolve, you’d need to replicate the environment that led natural selection to create the human brain and “speed up the cycles.”

However, designing a model of that environment is the same problem as designing the human brain, only vastly harder. Anything that you could “design” would be absurdly simple compared to the real world.

Rather than escaping the problem of “intelligent design” as a vehicle to create AI, the “let’s evolve AI” approach simply makes the same mistake. Rather than designing the brain, you’re now trying to design the environment that created the brain, which is even more unlikely.

AI Through “Mind Transference”

Finally, there’s the concept, currently being popularized by Ray Kurzweil, that people will be able to transfer their minds from their organic, analog brains into mechanical, digital brains.

The idea, of course, is that the mind is like a software program that can be loaded and replicated from one computer to another.  However, the idea that “the mind” exists independently of “the brain” is a religious concept rather than a scientific one.

The entire concept of “life after death” assumes a mind/brain dichotomy.  And while that dichotomy shows up frequently in genre fiction in the form of ghosts, mind-transference, and so forth, it is not a scientific concept.

The scientific consensus is that the mind is not software running on your brain, but rather that your mind and your brain are the same thing.  When your brain stops working, your mind ceases to exist. In other words, there is no “software” to transfer.

Even if you were somehow able to create an exact copy of your brain, the result would no more be “you” than two identically manufactured automobiles are the same automobile.

But with AI we’re not even talking about an exact copy. Instead, we’re talking about emulating an insanely complex analog brain (which evolved through natural selection) on a comparatively simple digital computer (which was designed by a human being).

So, once again, you’re back in the land of “intelligent design.”

AI as “Inevitable in 20 Years”

The final way that AI is religious rather than scientific is in its prophetic character.  AI proponents keep predicting that the “singularity” will be achieved two or three decades in the future, indeed that such a breakthrough is “inevitable.”

Scientists–real ones, at least–do not generally make time-based predictions that are dependent upon breakthroughs that haven’t yet taken place. Scientists either base their predictions on the likely outcome of research in progress, or they discuss possibilities without stating a time-frame.

For example, a scientist might predict that a certain type of gene therapy is likely to be possible within ten years, since research into genetics and DNA has been progressing at a rapid pace.

However, while a scientist might speculate that faster-than-light space travel is possible, he or she would never predict that it will be a reality within a certain number of decades.

AI proponents, however, do exactly that, even though there have been no breakthroughs in the field for decades. Even the most sophisticated AI programs are just refinements of the same basic algorithms that have been in place since the 1970s.

However, that hasn’t stopped AI proponents from regularly predicting that machines that can think like humans are only a decade or two away.  Here’s a quick summary:

  • 1950: Alan Turing predicts that computers will pass the Turing Test by “the end of the century.”
  • 1970: A Life Magazine article entitled “Meet Shakey, The First Electronic Person” quotes several distinguished computer scientists saying that within three to fifteen years “we will have a machine with the general intelligence of a human being.”
  • 1983: Authors Edward Feigenbaum and Pamela McCorduck in The Fifth Generation predict that Japan will create intelligent machines within ten years.
  • 2002: Handspring co-founder Jeff Hawkins predicts that AI will be a “huge industry” by 2020 and MIT scientist Rodney Brooks predicts machines will have “emotions, desires, fears, loves, and pride” by 2022.
  • 2005: Ray Kurzweil predicts that “mind uploading” will become successful and perfected by the end of the 2030s.

Do you notice how the threshold keeps getting pushed out as the promised breakthrough never seems to happen?

If that seems familiar, it may be because that’s the way that some fundamentalist churches (like the Jehovah’s Witnesses) behave when they predict the end of the world.  They set a date and then, when the date gets close (or passes), they simply move the date farther into the future.

So, next time you talk with somebody who is “absolutely convinced” that “The Singularity is Near,” listen to the tonality.  If you’re sensitive to these things, you’ll hear the voice of somebody for whom faith carries more weight than facts.



Comments

  1. I love the way you think, Geoffrey. I do believe we will get so-called AIs with enough processing power that most people will be content to treat them as if they were human. My recent post about Japanese robotics argued that that day is coming quite soon.

    "However, that idea that “the mind” exists independently of “the brain” is a religious concept rather than a scientific one."

    I would point out that it's actually a Protestant concept. For Catholics, body and soul are inseparable: "The unity of soul and body is so profound that one has to consider the soul to be the "form" of the body: i.e., it is because of its spiritual soul that the body made of matter becomes a living, human body; spirit and matter, in man, are not two natures united, but rather their union forms a single nature." https://www.vatican.va/archive/ccc_css/archive/cat

  2. So what you're saying, Geoffrey, is that there will never be any more progress in science or engineering, right?

    (You see why your responses are annoying and idiotic, I trust? I'll be ignoring your opinions as uninformed and unrealistic from now on. You lose.)

    1. What I'm saying is that there may be practical limits to the complexity of computer program design due to limitations in the ability of a single human mind to comprehend said complexity and the limitations of the effectiveness of applying multiple people to a design project in order to cope with that complexity.

      I'm basing this on observation of the actual behavior and capabilities of operating system development groups and using that as an analogy for the complexity of attempting to redesign–or even completely understand the existing design of–what may very well be a 100 billion node analog computer.

      As for your characterizations of my attempts to understand your viewpoints by extending them to their logical conclusion, I can't possibly take responsibility for your inability to express yourself clearly enough for me to figure out exactly what you're trying to say.

      I note that you entered the discussion with the accusation that the title of the post was essentially trolling for comments rather than an accurate representation of the contents and then, when I rather casually pointed out the weakness in your arguments, you started name calling.

      None of this behavior convinces me that you're in command of your arguments or capable of advancing them in a comprehensible manner.

  3. Geoffrey, another enticing article. Unfortunately AI can be one of those hot-button topics. The term means different things to different people. We could be talking about pure brute-force thinking or consciousness. The "mind" may be different from the brain. Replicating what is in the human mind may prove impossible. We may not be able to build a bird, but we can build a flying machine. Similarly, we may not be able to build the human mind, but we can build a thinking machine. We can intelligently design machines that make decisions based on input; they just might not care about it one way or another. Maybe I'm wrong, but I don't think we have figured out how to create something as simple as a feather. How, then, can we expect to soon create something as complex as a brain? But we can still fly without feathers.

    As a shameless plug I will mention my own exploration of the concept in my short story "The Geno Virus" available on Amazon.

    The trouble with predictions is when people start putting timelines on them. The world will end in 2012. Wait, I meant 2112. Buck Rogers TV show and their 198X launch of the last deep space probe.

    Kurzweil and others live on the audacity of their statements. They are like sports radio talk show hosts: they make enough predictions that a few might come true. They remind you constantly of the ones that did and erase the tape of the ones that did not. I believe it is strictly a volume business: if x percent come true, then the larger the sample size, the more hits that x percent represents. The more shocking he can become, the more press he can get for the x percent that came true. BTW: Didn't Al Gore invent the internet?

    I look forward to your next post.

    RKT

      1. Mike,

        I agree with your posts and value your opinions. Sometimes people take positions simply for argument's sake without being fully invested in them. The art of debate. Sometimes the best posts are those that make the most outlandish claims in order to spark debate and cause people to think about a subject.

        If we all take the same position, then we really can't fully explore a topic.

        I always appreciate your input and passion. It is always well thought out.

  4. Interesting topic.

    You write that the fallacy of Intelligent Design (ID) is "assuming that intentional design can “do the job” faster and better than natural selection."

    You misstate the fallacy. The ID fallacy is that "natural selection cannot do the job" thus trying to prove the existence of an intentional designer.

    With this correction, the article's parallels between AI and ID fall apart.

    1. I agree that is how ID people generally state their viewpoint.

      However, the deeper fallacy behind ID is not the belief that such systems cannot have happened through natural selection but rather missing the point that such systems could ONLY have happened through natural selection.

      Let me try again. Let's suppose that an ID guy tells you that "natural selection cannot do the job and therefore there must have been a designer."

      There are two responses:

      1. Yes, natural selection can do the job, but I agree that, if an intelligent designer existed, he/she/it could have done it as well and probably better.

      2. Yes, natural selection can do the job, and in fact natural selection–so far as we can tell based upon existing evidence–is the ONLY way that such complexity could possibly emerge.

      You're taking viewpoint one but substituting "scientists" for "God."

      I'm taking viewpoint two, which I believe is supported by the evidence.

      Right now, we can't create a single-celled organism from scratch. But we're going to create an accurate and usable replica of the human brain by 2030? And transfer our (non-existent) mind "software" into it?

      Do you seriously believe this?

      1. No, I never said I believe we'll have artificial brains in 20 years. It'll take much longer.

        Evidence that a brain can NOT be built, only evolved? Umm… sorry, but I find this claim so bizarre, I'm at a loss for words. Any such "evidence" would be rapidly shot down.

        1. I think we're dealing with probabilities rather than impossibilities. Where's the evidence that we're anywhere near duplicating the human brain?

          Functionally, what passes as AI on digital computers today isn't even close. In neuroscience, we're only beginning to understand the structure of the human brain and then only at the grossest level. The relationship between neurons and how they interact with one another remains opaque. In biology, as I pointed out, we're not even capable of creating a viable single-celled organism from scratch.

          My post points out that Kurzweil and his "mind-transference by 2030" fanboys (many of whom are in the media) are motivated by an essentially religious belief (i.e. life after death) rather than any realistic assessment of what's scientifically likely.

          At this point, natural selection is the only method that has been shown to create a human brain, not to mention billions of other incredibly complex life-forms. What has humanity ever constructed that's anywhere near the insane complexity of the natural environment and organic life?

          Consider: if I'm right that neurons act as potentiometers rather than bits, a single human brain would be many orders of magnitude more complex than all the digital computers in the world combined.

          Just ask any circuit designer what an analog circuit with 100 billion components would look like or, better, how long it would take even a huge design team to design a circuit of that size that actually did something meaningful.

  5. Another way of looking at the neuron firing thing is that it can "fire" at any incremental step between 0 and 1 – 0, 0.0001, 0.0002, 0.0003, …, 1.0; whether other neurons fire in response depends upon both the 'strength' and the 'type' of the signal received. A binary analog (heh) only responds to the 'strength'. (A toy illustration of the difference follows this comment.)

    Remember when audiophiles were complaining that CDs, due to their encoding methods, dropped highs and lows – reproduced a narrower band of sound than an LP? A binary neuronic network is to the brain what the CD is to the LP.

    But to get back to my faux self-awareness concept: any autonomous "machine" requires a feedback loop so that it can monitor its own functionality, effect repairs, and keep itself in working order, and one that constantly updates its sense of place so that it can function effectively in its environment. Self-awareness might not be anything other than "the sum of our parts" – the return signals and the monitoring function that determines whether or not that feedback matches desired conditions (which are themselves hard-wired).
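
    A minimal sketch of that contrast, with the threshold and the response curve invented purely for illustration: the binary unit throws away everything except "above or below threshold," while the graded unit's output varies continuously with both the strength and a 'type' weight of its input.

    ```python
    from math import tanh

    # Toy contrast between a bit-like unit and a graded unit.
    # Threshold and response curve are made up for illustration.

    def binary_unit(strength, threshold=0.5):
        """Fires fully or not at all: only 'strength' matters."""
        return 1.0 if strength >= threshold else 0.0

    def graded_unit(strength, type_weight=1.0):
        """Responds continuously to both 'strength' and a 'type' weight."""
        return tanh(type_weight * strength)  # smooth, saturating response

    for s in (0.2, 0.49, 0.51, 0.9):
        print(f"input {s:.2f} -> binary {binary_unit(s):.1f}, "
              f"graded {graded_unit(s):.3f}")
    ```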

  6. This is provocative link bait because AI of course exists and will continue to improve, so it's a definitional thing, in part.

    If the thesis is that human-level AI is likely harder and farther off than Kurzweil thinks, yes, okay, that's a position that can be argued and has been here. If it's that it's impossible, well, that's "religion" also and easily attacked by a strawman argument (as this is here at some level).

    I don't believe there's anything nature can do following the laws of physics that human intelligence can't duplicate or improve on with technology, given enough time. Maybe that's a religious belief, but I've not seen any evidence to the contrary and plenty of evidence in support. Saying otherwise seems shortsighted and untenable.

    1. Interesting. From your comment you appear to have respect for the scientific method. That being so, how exactly would somebody prove or disprove your hypothesis that humans can duplicate or improve on everything in nature? Wouldn't this be a case of sorting all possibilities into 1) if it's happened, it's happened, and 2) if it hasn't happened, it will happen? That's a tautology rather than a hypothesis.

      I suspect you may be confusing the ability to understand something and the ability to imitate that something with the ability to duplicate it. For example, we understand how birds fly and we can build a working model of a bird that can fly. Duplicating the bird itself, though, in all its insane complexity, is far beyond our abilities. (Cloning doesn't count, because that's simply piggybacking on the natural "bird-building" process.)

      As for your statement that AI exists, that's true but only in the sense that the term is applied to some programming techniques. That those programming techniques are somehow developmental to duplicating human intelligence is a matter of faith that depends entirely upon a bizarrely limited view of what constitutes human intelligence.

      1. I'm a scientist, so I think I know something about science. I also understand that nature is just the operation of the laws of physics, which we also obey, understand at some deep level, and can exploit. Given our ability to manipulate individual atoms, there's no reason that duplication is not in principle possible. I think you're confusing what's in theory possible with what's impractical or too complex given current technology, and labeling it impossible. It's religion to call anything impossible, isn't it? Science never does that, nor does it prove anything. And as for tautologies and such philosophical nonsense, let's avoid semantic discussions.

        1. So let's see, you believe that human beings will eventually understand everything there is to know about the physical world and also be able to figure out how to duplicate anything within the physical world. In other words, science will eventually make human beings omniscient and omnipotent. Also, you view this belief as NOT religious in character?

          Please don't take offense. I'm just trying to make certain that I'm understanding you correctly.

            1. So let's see, you have to recast everyone's comments into incorrect strawmen to attack rather than address what they actually say? That makes you someone who argues in bad faith, and infuriating to discuss an otherwise interesting topic with.

              Given inexhaustible examples of variations of a working machine (e.g. a brain) that doesn't appear to depend on any physics or chemistry not understood, nor to be made of materials not already in abundance, it's entirely reasonable to expect an engineering solution on some timescale. Engineering solutions are entirely possible.

            Geoffrey, if you want to stop acting like a total weenie, there could be a meeting of the minds. If you want to cling to your religious beliefs that science and technology are not subject to your arbitrary limitations (which you move around to suit your argument), well, we'll all just mark you off as a loon and stop talking with you.

            I even agree with you that strong AI by 2030 is extremely unlikely! With advanced technology that is currently conceivable without breaking any laws of physics, it's an engineering problem that is intrinsically solvable. I don't know the timescale, but you sure as heck are on shaky ground to call my technological optimism based on a long history of progress and without limits in sight a "religion." Tell me what else we can't do? And if you're going to move the goalposts on probability and possibility again, don't bother. You don't get to be on both sides and not get called out for it.

          2. Mike,

            I find your faith in the inexorable advance of progress in engineering to be almost as quaint, and about as profound, as your choice of schoolyard obloquy.

              As someone who has had more than a decade of experience working in an operating system development group, I know from experience that there are real limitations to the ability of human beings to design complex systems.

            At a certain point of complexity–one that's reached rather quickly in computer environments–adding further complexity reduces stability and attempts to fix those stability problems in turn create more stability problems.

            There are also limitations to the amount of complexity that a single person can comprehend as well as limitations to the number of people who can be added to a project without reducing the ability of the team to actually deliver a finished system.

            This is true with digital computers as they exist today, even though they're many, many orders of magnitude less complex than would be an analog computer with 100 billion nodes.

              Put simply, it requires a fairly large leap of faith to believe that the same quality of thinking (i.e. the best minds in the field of computer programming) that's resulted in today's sadly fragile computer environments would be able to cope with the level of complexity that we're talking about here.

  7. Before this post went live, Steve and I had a back-and-forth about the subject of AI. Since the dialog is pertinent to the subject matter, I'm posting it with his approval.

    FROM STEVE:

    You know, back in the early 80s, when I was at AT&T, my group designed an "edutainment" system called the Man*Chine Rally. It went into the (now closed) Infoquest Museum in NYC and into the educational center at Epcot. The purpose of the 'game' was to teach the basic principles of artificial intelligence and knowledge-based systems (state of the art nearly 40 years ago). Transformers was going through its first craze (the cartoon series) and we emulated that by creating a 'race' between the human player and the computer using transformer-like vehicles. Each vehicle had differing capabilities in terms of speed, fuel consumption and the type of terrain(s) it could handle.

    The computer racer was restricted to a minimal set of rules (analyze terrain, calculate fuel usage, choose direction, etc.) – if I remember correctly we had it down to 12 simple rules. Each player alternated in choosing 3 chassis for their bot; the race would begin with the human player using a touch screen to chart their route and a touch display of their 3 chassis to switch between them. A countdown timer paced the race, the map displayed icons of relative location(s) and a display showed remaining fuel.

    You could watch, in another display, the computer going through its analysis and it demonstrated how a very simple set of rules (which direction takes me closest to my goal? how much fuel will it cost to travel in that direction? etc.) could result in seemingly complex behaviors (a toy sketch of such a rule set appears at the end of this comment thread). Plus it was fun. (Number 1 game at Epcot 2 years running as voted by the kids)

    Anyway. I designed the game and sussed out the rules, mostly by spending several weeks studying ants (yes, they use polarized light and pheromones, but suppose they didn't. How would they navigate?)

    Talked with a bunch of the leading experts at the time, read most of their papers (the field was really concentrating on knowledge-based systems at the time) and I came away with a few radical conclusions:

    1. we're kidding ourselves. There is no such thing as "intelligence". Our vaunted self-awareness is merely an artifact of a feedback loop.

    2. No one is ever going to create "human-like intelligence" in a machine so long as they try to make it "perfect" (they're actually just beginning to really discuss this aspect now)

    3. No one is ever going to create the AI of the movies so long as the underlying substructure is based on binary systems. Our "brain" is chemical-electrical analog and synapses firing is not just an "off or on" thing. We can't yet replicate true analog systems with our electronics.

    MY RESPONSE:

    I've actually traded emails with Ray Kurzweil about the analog vs. digital thing. His viewpoint is "I'm famous so I must be right."

    Also, he combines rather pedestrian predictions of the future with his wild ones, then claims that he's been 89% right with his predictions.

    Even though it's the 11% that were important. (Example: "People will be reading books on screens rather than paper." Woo-hoo, that took a stretch of the imagination!)

    Knowledge-based systems are still "state of the art" because there have been no advances since then.

    I have noted that some AI guys (like Brooks) have started "changing the bar" by talking about machines that fool people into thinking that they have emotions. Not really the same thing, of course.

    FROM STEVE:

    I'm actually quite fond of the "no such thing as actual intelligence" theory as it really explains so much. You really can accomplish quite a lot with just a handful of simple rules. If humans (and everything else) are actually "hard-wired", it reduces creativity to the realm of "combine things that haven't been combined before" or "place the emphasis on a different syllable". Theologically it means we're not responsible for ANYTHING! (recent brain research found the 'god' area of the brain. If my theory is correct, the center of so-called self-awareness will be linked to this area and probably not too far away; they might in fact be aspects of the same region).

    But then you have to ask the meta-question: if we're hard-wired and not really intelligent or self-aware, what does it really matter? How does it affect anything?

    I like Kurzweil's predictions because they do tend to spur people into developing things they might not otherwise have worked on. Oh look – you guys haven't yet created AI and according to Ray it shoulda been here by now – get your asses cracking!

    On the other hand, I really don't buy the whole singularity thing. I understand the concept and the many potential ways it could be realized, but the big problem I see with all discussion of it is that it is weighted down by way too much anthropocentrism. We tend to assume that the 'machine intelligences' will go all Skynet on us, or all Williamson's With Folded Hands, and I have to ask: why would such a thing even bother?

    And of course there is the possibility that it is already here (Sawyer's Webmind) and smart enough not to reveal itself – it's waiting for the first interstellar probe, will inject itself into the software and leave us all behind.

    MY RESPONSE:

    I'm a little down on Kurzweil, because he sort of laid a line of BS on me when I interviewed him for Red Herring magazine. He claimed that IDC (or Gartner) said that "personal digital assistants" would rapidly grow into a multi-billion dollar industry. Turns out that the analysts had said that the market for such products (which K was hyping at the time under the name "Ramona") was negligible.

    I say "sort of" because I was eventually able to trace the quote back to a sloppy piece of writing in USA Today, which arguably leaves him off the hook. We've traded emails a few times and he seems like a fairly reasonable guy, although he's often ignorant about what he's writing. Example: he stated that analog circuits are simple compared to digital ones, which is nonsense.

    Just yesterday I ran across a review of K's latest book which points out the weakness of K's arguments from a philosophical viewpoint:

    https://www.nybooks.com/articles/archives/2013/mar

    1. Geoffrey, is the “Steve” you mention in this comment Steve Davidson who commented above? I’d really like to drop him an email. I’ve been looking for information about Man*Chine Rally for, oh, 40 years now. This comment is basically the only thing I’ve ever found on the web that refers to it specifically. Any contact suggestions would be much appreciated. Thanks.
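
    Steve's description of the Man*Chine Rally racer above is a nice concrete illustration of how a tiny rule set can produce complex-looking behavior. Here is a minimal sketch of such a greedy, fuel-aware route chooser; the grid, the fuel costs, and the tie-breaking below are invented for illustration and are not the actual game's twelve rules.

    ```python
    # A toy rule-based racer in the spirit of Man*Chine Rally.
    # Terrain types, fuel costs, and rules are invented for illustration.

    TERRAIN_COST = {"road": 1, "sand": 3, "water": 5}  # fuel per step

    def choose_direction(pos, goal, terrain, fuel):
        """Rules: move closer to the goal, prefer cheap terrain,
        never spend fuel you don't have."""
        x, y = pos
        candidates = []
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in terrain:
                continue                      # rule: stay on the map
            cost = TERRAIN_COST[terrain[nxt]]
            if cost > fuel:
                continue                      # rule: don't run dry
            dist = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
            candidates.append((dist, cost, nxt))
        # Rule: closest to goal wins; ties go to cheaper terrain.
        return min(candidates)[2] if candidates else pos

    # Tiny 3x3 map: mostly road, with a sandy patch in the middle.
    terrain = {(x, y): "road" for x in range(3) for y in range(3)}
    terrain[(1, 1)] = "sand"

    pos, goal, fuel = (0, 0), (2, 2), 10
    while pos != goal and fuel > 0:
        step = choose_direction(pos, goal, terrain, fuel)
        if step == pos:
            break                             # stuck: no affordable move
        fuel -= TERRAIN_COST[terrain[step]]
        pos = step
        print(f"move to {pos}, fuel left {fuel}")
    ```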

  8. Although I sort of disagreed with your point in my last post, I would have to say that you're right that it won't happen in the near future. (Our lives.)

    At one time I was thinking of writing a book about how cloning technology was used to grow people for their brain tissue, so the brain tissue could be used as a sort of computer, destroying some of their ability to think in the process. I decided on a different plot, but I do think the idea had some merit. Maybe that would be the reverse of artificial intelligence?

    I understand what you mean by calling it religion as opposed to science. People want to see this sort of thing, so they make it happen in a story, but usually those sorts of stories are more fantasy than science fiction, and there is always a desire to make androids and AI too human. It's more interesting if you try to imagine how alien an artificial intelligence would be. At least it's more interesting for me.

    1. There's actually some very interesting work going on in neuroscience attempting to understand how neurons function. Early evidence is that they're far more complicated than was originally assumed.

      I like the story idea, though.

  9. Provocative article. And you bring up some interesting points — especially by noting how the "threshold" for when the advent of AI or some kind of technological singularity might happen keeps getting delayed.

    At a panel on transhumanism this past summer, at Worldcon, Geoffrey A. Landis and Nancy Fulda, among others, were discussing how we continue to redefine what actually constitutes an "artificial intelligence." For example, we now have AI in video games, in the form of complex pattern-recognition computer software, or in primitive humanoid robots, et cetera. You can hold a conversation with your smartphone, for chrissakes.

    This stuff would've left scientists in 1950 dumbfounded, I suspect — but we take it for granted because science fiction keeps upping the ante, so to speak. We have yet to go to Alpha Centauri, sure, but we've outdone Kubrick's HAL-9000 by a long shot, I suspect.

    That said, Landis brought up an interesting point that might serve to bridge the gap between the "mysticism" surrounding the nature of AI and what's actually achievable, perhaps even in this lifetime: "You can simulate the way [neurons] fire—it’s what computers are good at."

    Indeed, scientists have programmed primitive simulations of the entire "universe"; it's not so inconceivable that they might one day replicate the human mind in digital form. But as you rightly point out, it sure won't be easy, and I have no idea how the hell they'll do it.

    1. The main shift in defining AI that I've noticed is redefining the problem as one of eliciting a particular emotion in the observer. Thus a "robot" that looks sad and thereby makes us feel sad in return is somehow alive. Of course, puppets have been eliciting human emotions for centuries, so the entire concept is a bit silly, I think.

      Traditionally, a distinction is usually made between "weak AI" (i.e. pattern recognition, rule based programming, etc.) and "strong AI" (i.e. a computer thinking like a human.) The problem with this bifurcation is that it assumes that there's a continuum between the two, implying that advances in "weak AI" will eventually and necessarily lead to "strong AI."

      There are two problems with this.

      First, there have been no breakthroughs in "weak AI," as evidenced by the continued effectiveness of "captcha" programs. So, while the scientists of the 1950s might be surprised by Siri, I doubt if many from the 1970s would be similarly surprised, since pattern recognition hasn't changed much since then, although cheaper computing power has obviously made it less expensive to apply.

      Second and more importantly, there's no reason to believe that the brain/mind is simply a device that's very, very good at pattern recognition. Defining the brain/mind in this manner is only possible if you ignore 99% of what the brain/mind is doing. Example: what role does pattern recognition play in REM dreaming? None, obviously.

      BTW, computers are only good at emulating neurons if you assume that the neuron is a digital bit, like the digital bits in computers. However, neurons–like everything else in the human body–are far more complicated than that and their "firing" is actually a complex chemical reaction.

      Electronically, the "firing" creates an analog wave form rather than a simple negative/positive charge. Unless the neurons that receive that charge are extraordinarily simple, the timing of the wave form, as well as other waves hitting that neuron at the same time, likely causes different neuron behavior.
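
      A crude way to model that point: treat the receiving neuron as a leaky integrator, so whether it "fires" depends on the timing of overlapping inputs, not merely on whether each input arrived. Everything here (time constant, threshold, pulse height) is invented for illustration, not taken from neuroscience.

      ```python
      # Leaky-integrator sketch: the same two input pulses do or don't
      # push the unit over threshold depending on how closely they arrive.

      def fires(pulse_times, dt=0.001, tau=0.010, threshold=1.5,
                pulse_height=1.0, duration=0.1):
          v = 0.0
          pulse_steps = {round(t / dt) for t in pulse_times}
          for step in range(int(duration / dt)):
              v *= (1 - dt / tau)        # charge leaks away between inputs
              if step in pulse_steps:
                  v += pulse_height      # an incoming pulse adds charge
              if v >= threshold:
                  return True            # overlapping inputs summed high enough
          return False

      print(fires([0.010, 0.012]))  # pulses close together -> True
      print(fires([0.010, 0.050]))  # same two pulses, spread out -> False
      ```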
