Your Friends & Neighbors


Michael Ferguson talks about the marriage between Man and Machine

Michael Ferguson and I are about to have a conversation, despite the fact that he’s up in Salt Lake City right now and I’m at my desk on campus at UVU, all thanks to the telephone. After that conversation, I’m going to remember every single thing that Michael has said, despite the fact that I usually can’t even remember what happened last night, all thanks to my pocket recorder. The fact that you’re reading this means I can communicate ideas and opinions to a range of people without assembling them all together, juggling schedules or straining my voice. For most of us, this is humdrum, quotidian kid stuff. But for Michael Ferguson, the founder of the Transhumanist Alliance of Utah and a PhD candidate in the University of Utah’s neuroscience program, all of these little technological helpers are present-day proto-examples of a fast-approaching future when man and machine meld together – a time commonly known as the Singularity.

For starters, why don’t you explain what the Singularity is and what it entails?

The Singularity is a theoretical future point at which the changes in technology and the human interface of technology are going to evolve so rapidly and the paradigm shifts are going to be occurring so ferociously that the only way for humans to keep up with it will either be to somehow merge with technology, or to just relinquish the effort to keep up with the paradigm shifts and delegate that to strong, artificial intelligences. Hypothetically, these [artificial intelligences] will emerge by that point and will have general intelligence that exceeds what [intelligence] human beings will have evolved to embody.

Ray Kurzweil (renowned inventor, futurist and author of THE SINGULARITY IS NEAR) says that the Singularity will happen by 2045. Are there different estimates as to when it will occur?

There are definitely different estimates as to when it’s going to occur. Kurzweil is the most optimistic about an accelerated timeline for Singularity.

This estimate is based on Moore’s Law (which describes a long-term trend in the history of computing hardware), yes?

Exactly. Here’s the thing, though: Moore’s Law, where you have a doubling of processing speed every year or so, with some projections that the doubling rate is itself increasing, is only a fraction of the formula. Artificial intelligence is way more than just rapid processing. You have to have an accompanying development of the software side of things, of intelligence theory and of information theory. Right now we don’t even have a framework for thinking about some of these problems. It’s a little bit oversimplified to base these projections on Moore’s Law alone.

What is the world going to look like in 2045, if we reach Singularity?

What’s a lot more likely to occur is just a continual progression of adapting to new technologies as they emerge. Just a couple of years ago, no one had the Internet in their pocket, and now that’s a pretty standard commodity. It’s a pretty radical shift as far as information access, but it’s not like everyone’s freaking out that we suddenly have perpetual access to the world’s information base. I think a lot of these things are going to happen in a way that feels more natural than a lot of futurists suggest. Right now, for example, we already have brain implants. Cochlear implants are a prime example. People with a hearing disability are able to get essentially a computer that interfaces with their brain in order to translate physical signals into neural impulses that their brain then interprets as sound.

So, this is already happening then?

Yes! Like I said, [technology] is going to progress in a way that’s going to feel really organic. I predict that 20 or 30 years down the road there’s going to be a lot more of a transhuman type of aesthetic than a Singularity type of aesthetic.

What is the difference between Transhumanism and the Singularity?

First, I have to preface this by saying that these are generalizations I’m making, although I think they’re accurate ones. Transhumanism is a technologically driven philosophy which says that, now that we’ve evolved to a point where we have these rational, intelligent capacities, we get to participate in the evolutionary process, and that we’re far from maxing out our capacities. Transhumanism basically asserts that there is no reason we should be limited by our biological evolution, whether in the realm of cognitive abilities, of longevity and life span, or of physical capacity and strength.

Not to sound too much like science fiction, but this sounds like the concept of putting your brain into a robot body.

Yeah, that’s definitely one way of looking at it. People on the Singularity side of things tend to be more focused on the development of extra-corporeal artificial intelligence. In other words, you’re creating artificial intelligences that are totally distinct and separate from human beings. Some people go so far as to talk about these artificial intelligences taking over leadership and decision-making because they’re going to have so much more foresight as to cause and effect outcomes.

It does seem as though there is a lot of alarmist thought about what the Singularity is, much of it seemingly based on movies like TERMINATOR or BLADE RUNNER – this idea of human obsolescence and machines taking over. How would you assuage these fears?

First of all, I would say that all of these technologies are developed by humans, and we’re developing them for purposes that fit in with some desirable outcome for humans. It would be a pretty radical break for these sophisticated machines that we’re creating to serve human beneficence to somehow attain a consciousness of their own and realize that we humans are just these annoying parasites. It’s really hard to do artificial intelligence programming and simulation.

The most sophisticated artificial intelligence right now is IBM’s Watson, which just beat Ken Jennings on Jeopardy. Yeah, it’s really cool, but in a way it’s just an awesome Trivial Pursuit robot. Given the realities of how slowly and how metered scientific progress, technological progress and software development unfold, those scenarios where the artificial intelligence attains a consciousness of its own and somehow subjugates its programmer just seem pretty wildly unrealistic.

So, there’s not a huge chance of rogue cyborgs taking over?

Staying grounded in the fact that these are software algorithms and technological outcomes created by humans, I just don’t see a mechanism for this type of cataclysmic subjugation of humans by artificial intelligence.

And what are the differences between artificial intelligence and actual intelligence?

These are questions that professors have been studying all their lives, duking the hell out of each other over their particular pet theories. I wish there were some sort of clear-cut, precise answer. It’s interesting. I think a lot of it is going to come down to neuroscience. As a neuroscientist myself, I’m of course biased to make a statement like that. But a lot of that question comes down to the nature of the human organism.
At the end of the day, are we really, really sophisticated machines that just run off a carbon-based platform? Because if we are, if that’s “all that there is” to our identity and cognition, then there’s clearly going to be a point in time at which we’re going to be able to replicate that with a really sophisticated degree of emulation. If, however, we are dualistic in some way – whether that dualism is a sort of Buddhist paradigm of a mind and a soul, or a Christian paradigm of a spirit, a breath of life – then that’s a game changer. And I think a lot of people would be surprised to know that it’s actually not an open-and-shut question in science. This is a very divided issue in the field of philosophy of mind. You have very smart, mainstream philosophers of mind who argue stridently for a dualistic paradigm of the human brain, the foremost of whom is probably David Chalmers.
As far as how to think about the potential for machines to really attain intelligence in the way we think of it with human beings, I think it’s going to come down to the ultimate question of whether or not there is a dualistic quality to the human organism.