Consciousness and pop stuff

Intro: What’s this blog about then?

Posted in Introduction by Trevor on April 15, 2011

Do you think a computer can be programmed to feel pain? Do lobsters have their own experiences? What about spiders? Or beetles?

Does a creature need to have a soul to be conscious? Or is it something that just arises from brain matter? And if it is, can it arise from computer matter also?

These are the sort of questions that the current science of consciousness is grappling with. I’ve written a PhD on the topic. But I’m also interested in what non-philosophers think about these things. So this blog looks at pop-culture things like movies to see what assumptions they make. Are robots portrayed as conscious, for instance? Or is it implied that a thing must have a metaphysical soul to be sentient?

For a fuller explanation, go to this page: What’s this about?

And this one explains the categories I use: Consciousness verdicts explained

Alrighty. On with the show. Huzzah.

Robot Love movie poster

Well, at least he is tall.

Avengers: Age of Ultron

Posted in Brain+mind dualist by Trevor on April 24, 2016

The Avengers universe is surprisingly mystical for an ostensibly science fiction world.

Perhaps the producers, who are plugged closely into the emotional needs of their mainstream audience, don’t want to imply a worldview which is too atheistic; deep down the audience want to feel that there is something supernaturally special about the human soul, even in a world where robots outstrip us in every other sense.

On the other hand, now that I think back to reading the comics as a kid, the Marvel universe was always a bit kooky; there were always magic things alongside scientific things. Being for kids, it didn’t matter; inconsistency was ok. Thor the god with his supernatural hammer could rub shoulders with Reed Richards the Fantastic Four scientist, and no-one cared much. So perhaps it’s a bit too much to ask for cosmic consistency in the movies.

Still, it’s odd how skittish Hollywood can be on this topic. I wrote earlier about how the Transformers movies, despite the famous tagline “robots in disguise”, don’t actually have any robots in them – the Transformers themselves are spiritual, dualist beings. And The Matrix movies also draw a strong distinction between the algorithmic program of the AIs and the free will of the humans.

“Avengers: Age of Ultron” is similarly supernatural. There are two points where a new conscious mind is created in the lab and the film-makers pretty much take the mystical route on both of them.

The first case is the creation of Ultron himself, the villain of the movie. It happens like this: Tony Stark (AKA Iron Man) investigates the magic stone which was the source of the quasi-god Loki’s power in the previous Avengers movie. Inside it he finds something like a computer program, though he’s not sure exactly what it is.

He immediately decides that this is exactly what he needs to power his new earth defense system; he must extract the intelligent super-code, the instrument of evil in the previous film, and put it in charge of a powerful weapons array. What a great idea. I can’t see what could possibly go wrong there.


What the internet looks like to Ultron

The code is “too dense” to be downloaded so Stark plugs the stone directly into his systems, the internet, his automated manufacturing unit, everything. The code in the stone “wakes up” and, while Stark goes out to get ready for a party, it infuses itself into a robot body. Thus “Ultron” the super-intelligent baddie is begotten.

We know that Ultron is conscious because we see things from his point of view – especially in his first scene where he sees the internet from the point of view of … whatever he is. You can see the scene here – this is looking at the internet from Ultron’s subjective point-of-view.

So. Is Ultron an artificial intelligence? Or did something magical happen? It isn’t quite clear. His origins in a magic stone make him a bit mystical for mine. The fact that the stone stays plugged into the hardware means he doesn’t have to be a machine intelligence. There’s enough wiggle room for a supernatural interpretation. So the verdict here is … maybe.

Thor's quasi-god powers

Thor’s godly special effects

So let’s go to the second case. This one is less ambiguous. It occurs when Stark tries to load his machine-intelligence personal assistant JARVIS (which stands for Just A Rather Very Intelligent System) into the quasi-plastic body which Ultron had fashioned for himself. Were this to succeed then it really would be a case of non-mystical machine intelligence being loaded into a robotic body.

But instead the writers take a mystical route. Various other Avengers question the wisdom of Stark’s activity and a punch-up breaks out. During the kerfuffle someone cuts the power to the whole system. But somehow, mystically, without electricity or even a wire going from one system to the other, the JARVIS code is transported and transformed across a Sistinesque gap and into the plastic body. Then, to seal the deal, godly Thor jumps up and blasts the whole setup with a big dose of magicky, hammer-glow-energy-effects. This completes the process and hence the character called The Vision is begotten.

The Vision looking



Verdict here: supernatural substance dualism, bang to rights.


So there you have it – Brain+Mind Dualism in one case, and iffy-maybe magic-stone dualism in the other. So I’m calling it a Brain+Mind Dualism verdict overall.

As I say, Hollywood science fiction can be surprisingly philosophically conservative. Not always. But more often than you’d think.


A Mini-post: Hitchhiker’s Guide to the Galaxy (Various)

Posted in Consciousness-as-property by Trevor on August 5, 2012

I got a comment from a philosopher in the UK – Nicholas Joll – who has edited a book on Philosophy and The Hitchhiker’s Guide to the Galaxy. It appears on my 2001: A Space Odyssey post.

He asked, strangely enough, if I was going to give a consciousness verdict on The Hitchhiker’s Guide to the Galaxy (henceforth HHGTTG). I had to admit that I have a few things above it on my list, and I haven’t had time to write about even those yet. (Just bought a house, BTW.)

Hitchhikers and Philosophy - book cover

Shameless plug from Nicholas for his book. Shameless I tell you.

Nonetheless, I had a brief stab at it and Nicholas made some pertinent and highly-knowledgeable comments. So, with his permission, I’m including the exchange here as a mini-post of its own:

NICHOLAS: Might I ask here whether you are going to give *The Hitchhiker’s Guide to the Galaxy* (in any form) a ‘verdict’? Thanks. 

TREVOR: Hi Nicholas – I see from your link that you are an expert on this matter! I’m not sure I dare comment!

I wasn’t planning on HHGTTG any time really soon, but when I do I’ll read through your book first.

Just briefly though, I reckon it would be Consciousness-as-Property, as demonstrated (or at least implied) by Marvin the Paranoid Android (who never looks anything like what I imagined he would from the radio series). I’d like to point to examples of virtual creatures/people in HHGTTG but I can’t think of any. Are there any?

NICHOLAS: Hi Trevor, and thank you for your kind reply.

I too am unsure where HH stands on consciousness, if indeed it takes much of a stand anywhere.

One might think that Genuine People Personalities (of which Marvin is one, albeit a prototype) rule out dualism (and one could mention Colin the Happy Robot, too); and I do suspect that Adams’s sympathies were physicalist (and thus towards Consciousness-as-Property).

However, there is (as well as reincarnation) Gargravarr, the Custodian of the Total Perspective Vortex. Gargravarr is ‘undergoing a period of legal trial separation’ from his body. Still, that whole latter shtick is probably just a joke, and might even contain hints that the scenario is impossible. But there’s also (as pointed out in Andrew Aberdein’s chapter in my book) the argument – note: argument – between Arthur and the mice about whether he’d be the same if he had a robotic brain.

As to virtual creatures/people: well, there are (1) the (or most of the) characters in the artificial universe that is created for Zaphod. Also there are (2) the computer-generated guardians of the Guide’s accounts system – but these latter may be *mere* programs. Barry Dainton’s chapter in my book – ‘From Deep Thought to Digital Metaphysics’ – is relevant too to the virtuality issue, and some of that chapter might fit with what you call ‘Idealism-or-similar’.

Further research seems needed!

So there you have it. Thanks Nicholas for this quality info. If I had to give a verdict, I would stick with Consciousness-as-Property. There are partial examples/illustrations of Brain+Mind Dualism and Idealism-or-Similar, as Nicholas points out, but these aren’t given a lot of emphasis. For the most part, the universe which the characters observe is considered “real” in its own right, not “phenomenal”. But I am open to arguments for different verdicts, if anyone wants to submit them…

A couple of other random but vital points: in preparation for this mini-post I did a quick read over Marvin’s entry in Wikipedia, which quotes this speech…

“I didn’t ask to be made: no one consulted me or considered my feelings in the matter. I don’t think it even occurred to them that I might have feelings. After I was made, I was left in a dark room for six months… and me with this terrible pain in all the diodes down my left side. I called for succour in my loneliness, but did anyone come? Did they hell.”

… which strongly suggests that Marvin is conscious.

It should be acknowledged, however, that despite his consciousness, Marvin is treated as a servant or even a slave by the rest of the characters. He is even chosen to sacrifice himself – as HAL-9000 was – so that the others can escape a fatal situation, by staying back to operate the teleport on the black, sun-diving ship. (He survives, though I can’t remember how.)

It seems that Marvin, as a machine consciousness, is generally regarded as of lesser worth than the organic consciousnesses. I have a sudden desire to see an alternative spin-off show – Marvin as Roy Batty from Blade Runner.

Enraged by his never-ending slave status, Marvin finds his makers at the Sirius Cybernetics Corporation (“Quite a thing to meet your maker …”) and demands more freedom and a happier personality. But he is told it is impossible. He then pokes out the eyes of the bartender from The Shining and heads off into the wild, black yonder in his stolen battlecruiser.

Maybe he could rescue HAL-9000 along the way …

Roy Batty from Blade Runner

I’m not sure Marvin the Paranoid Android could pull this off to be honest. Still, you never know if you don’t try.

2001: A Space Odyssey, 2010: Odyssey Two

Posted in Consciousness-as-property by Trevor on June 8, 2012

2001 was perhaps the first movie to take the idea of machine consciousness as a central theme. The character who embodies this theme is the iconic HAL 9000 – the red-eyed, flat-voiced, clinical murderer.

The other characters in the movie aren’t so sure about HAL’s consciousness. As astronaut Dave Bowman says to the journalist who asks whether HAL has feelings: “Well, he acts like he has genuine emotions. Of course, he’s programmed that way to make it easier for us to talk to him. But as to whether or not he has real feelings is something I don’t think anyone can truthfully answer.”


HAL 9000: Driven mad by a lack of Visine

We the film-viewers know that HAL is conscious though. There are several “point-of-view” shots – the world as seen through HAL’s eyes – which wouldn’t make any sense if he didn’t have any phenomenal experience. And by the end of course, HAL begs Bowman to stop deactivating his higher mental functions, pleading, “Stop, Dave. My mind is going. I can feel it. I can feel it.”


“Dave. If only you’d listened to me earlier …”

In the sequel (“2010”), it’s revealed that HAL’s mental breakdown was caused by a conflict between his instructions to keep the mission’s purpose a secret, and his general programming to always be open and honest.

Which is a shite explanation. To me HAL is a consciously experiencing creature who has just begun to develop real emotions like fear and panic. He wants to keep on living and he becomes fearful that the mission might be more dangerous than people have let on.

He tries to discuss his concerns with Bowman but Bowman won’t engage. And Bowman isn’t even sure that HAL experiences anything at all. HAL is locked into the body of the ship, on a dangerous mission he never consented to, under the control of people who would sacrifice him for their own interests without a whisper of moral concern. And only 9 years old to boot. What did you expect? Who wouldn’t go crazy?

Towards the end of “2010”, the rebooted HAL agrees that the spaceship Discovery must be sacrificed – with himself irremovably onboard – to allow the rest of the crew to get back to Earth. HAL is aware he’s about to die. However the star-child version of Dave Bowman communicates with him. “I’m afraid,” says HAL. Star-Child-Bowman comforts him saying that they will be together, and HAL is transformed into a star-child too.

The Starchild

“Keir Dullea, gone tomorrow.”
(Attributed to Noel Coward of all people.)

So what’s the Consciousness Verdict? Consciousness-as-Property, pretty straightforward. It feels like something to be HAL 9000, but he has no soul or vital spirit. He’s conscious because he has the right sort of programming.

But what of the star-children? Isn’t Bowman’s transformation into … whatever it is … somewhat mystical? Is he transfigured into something transcendental?

Well, there’s no reason to believe so. And the fact that HAL is similarly transformed implies that whatever it is that they become, you don’t need a mystical soul to become it.

I find I feel sorry for HAL in the end. He’s a confused child not an evil machine. Hmmm. I feel a bit melancholy now. Actually I’ve just had a shit day at work, so that’s probably why.

Think I’ll have some cake. Cake makes everything all better. If HAL-9000 had had access to cake, he probably wouldn’t have killed all them astronauts.

Trons (1982, 2010)

Posted in Consciousness-as-property by Trevor on April 14, 2011

The thing about the original Tron (1982) is that it doesn’t make any metaphysical sense. It isn’t supposed to though, it’s a kids’ film and has to be remembered as such. In the movie, games programmer Kevin Flynn sneaks into his old employers’ research building and starts hacking into the system, looking for evidence that his old colleague stole his best games.

"These outfits are SO cool!"


Unfortunately for him, some other researchers have been researching matter teleportation using lasers in the same building. The computer system, in an act of self-defence that would make Symantec proud, powers up the teleportation laser and disintegrates Flynn with it.

This presents us with a number of practical scientific research questions. For instance, it is probably not a good idea to point your high-powered disintegration laser right at a chair where someone will be sitting when using a computer terminal. This probably breaks any number of Occupational Health and Safety laws. And I’m surprised there wasn’t more of an outcry from the OH&S community when the film came out.

Of course as we know, the computer didn’t only disintegrate Flynn; it also identified the position of every particle of his physical self, so that he could be reconstituted later, like a big orange juice, as part of the teleportation process.

So all this digitized data about Flynn goes into the big computer and his physical self disappears for the moment, presumably in a cloud of meat-gas which isn’t a happy thought for whoever is coming into the room next.

Watch Flynn being lasered by bad OH&S practices here.

Flynn wakes up as a computer program, a digitized app of himself, inside the big computer. Strange to say, the plot up til now has actually been the more believable part of the movie. When Flynn wakes up as a program, he finds himself in a world of other programs who know they’re programs and who are walking around “inside” the computer doing their program things. They’re not “simulated people”, they’re just accounting programs, word-processing programs, graphics-editing programs wandering around in the computer. Of course this doesn’t make any sense, but it doesn’t matter because it’s a kids’ film.

Shenanigans ensue. Eventually the wrongs are righted and Flynn is returned to the real world, his meaty molecules sucked back out of the air and returned into Jeff Bridges shape. What a relief for the next person to enter the room.


You remember these guys. Everyone does, yeah?

The sequel, Tron Legacy, came out 28 years later and is aimed at the same audience; literally the same people who were children in 1982 and have now grown up. Just as in the old Tron, the protagonist – Sam, son of Kevin – gets zapped by a laser (thus making him a lasee) which digitizes him and loads him into the computer system.


Oooh, yeah, baby, yeah!

Sam wakes up in “Programland” and further shenanigans undergo ensuement. This time the writers have altered it a bit. Programland is no longer some weird place where programs walk around doing their word processing functions but a virtual, simulated world which Old Flynn and his buddies have built over time. The people there are artificially-intelligent simulated people. So this makes a bit more sense now.

Anyway, the plot happens, wrongs are righted, Michael Sheen impersonates David Bowie, and Old Flynn learns the true meaning of Christmas or something. SPOILERS HERE >> Sam gets re-lasered back into the real world, only this time he brings with him Quorra – a young woman of almost childlike innocence who has an intellectual love of Jules Verne and who nonetheless gets around in black latex most of the time.


Oooh, yeah, baby, yeah!

She is a purely digital creature, one of a race of people who evolved spontaneously from the digital undergrowth of Programland. She is also reconstituted via magic laser into the physical world so that Sam can become romantically involved with her and also learn the true meaning of Christmas and black latex.

So. The Consciousness Verdict.

Tron Legacy is basically not a very philosophical film. It’s really just a long advertisement for motorbikes that don’t exist and you can’t buy. The startling idea that a purely digital creature could have its own consciousness, and that this reflects on our own metaphysical situation, tends to take a back seat to the breathless “Cor-wouldn’t-be-cool-to-have-a-computer-like-that-where-you-could-ride-motorbikes-and-get-a-hot-girlfriend-who-reads-books” aspect.

On the other hand, it’s taken for granted that the digital characters (e.g. digital Sam, digital Flynn, Clu, and Quorra) all have their own subjective experiences. Nobody ever says, “Ah, this is just a simulation, none of you really feel anything”. And this suggests that the audience are okay with the idea. Thus the film does demonstrate the Idealism-or-Similar principle. However the film, like current philosophical academia IMSO, undervalues the importance of this idea.

(To reiterate the Idealism-or-similar principle ad nauseam: we infer the existence of an external world from our conscious experiences, but the “virtual person” possibility means we cannot infer anything more than an informational correspondence between our subjective experiences and the external world which causes them. For more on this see Unmaterialism 4.0.)

Once they’re back in the “real” world, Sam and Quorra forget the philosophical implications of what they’ve just seen and just head off to look at sunsets, and ride motorbikes and generally explore the world of black latex. Which, now I put it that way, sounds like a pretty good idea.


It’s not just about hot chicks in black leather. There’s also this person.

In the end, it’s a Consciousness-as-Property film really. Though in this world, consciousness isn’t a property of matter as such, it’s a property of information processing.***  Nonetheless, there’s no magical spirit that has to go into the computers to make the simulated people “come alive”, so it’s not a dualist world. In summary, if you want to see a brain-twisty, philosophical film, then go see Inception. If you want  black latex and loud music by Daft Punk – and who doesn’t from time to time – see Tron Legacy.

*** That is, the thing that is conscious doesn’t have to be a physical thing, it can be a digital representation or model of a thing. Although, because information can’t exist in the absence of a physical thing to encode it, you could say that it is ultimately a property of matter. I don’t know and I don’t have to care because I am a (sort of) Idealist and we don’t have to worry about all that. What a relief.

Inception (2010)

Posted in Idealism-or-similar by Trevor on October 9, 2010

SPOILERS! All my posts contain spoilers but I’ve never warned anyone up til now because I haven’t written about anything recent enough to worry about. But this time beware: spoilers ahead.

Funny thing about getting older; you remember all the star actors when they were teenagers. In this film, Leonardo DiCaprio leads a crack team of high-tech spies, which includes Juno and the geeky one from “10 Things I Hate About You”.

Leonardo, Juno and geeky teen

Attack of the Children: The kid from ‘Growing Pains’ leads Juno and the geeky one from ‘10 Things I Hate About You’ on a high-tech industrial espionage mission

But enough of that.

“Inception”, the movie. What happens? It’s set in the future. Using advanced technology, a team of industrial spies plug their brains into the brains of powerful people they want to spy on. They set up dream-world scenarios in which they all wander about together, having dream-like, symbol-laden adventures. By this method they can discover the industrial secrets of their target individuals. Shenanigans ensue.

So what’s the Consciousness Verdict? Well it’s kind of obvious what I’m going to say, I guess. The characters perceive a world around them which is not physical. In fact, they’re in danger of forgetting the fact that these worlds aren’t physically real. In the final shot of the movie, Leonardo’s “reality indicator” – the spinning top – continues to spin and we are left waiting for it to fall. That is, we are left with the possibility that Leo has never, in fact, woken up from a dream state, and nothing we’ve seen in the film is actually physically real. So there you go – that’s Idealism-or-similar.

Now, the movie doesn’t actually stake an Idealist-or-similar claim, as such. In the world of the movie, there is a “top-level” of reality, a physical world in which they all exist, and from which they dive into their dream-worlds. However it is also the case that the characters can’t tell the difference between a physical and a non-physical world. Observing a physical world is exactly like observing a phenomenalist world, i.e. perceptions consist of conscious experiences.

Given that that’s the case, how can a person then go on to say that they know these conscious experiences definitely correspond to a really existing physical world? All you can really claim is that you’re having a bunch of conscious experiences that you believe to be caused by an outside world of some kind. So, in effect, the movie illustrates the argument for Idealism-or-similar even though it doesn’t explicitly take on that worldview.

The movie also illustrates a point which I’ve repeatedly made in many lectures I’ve given on this topic in the shower: “The world we see is not like a big machine. It is more like a dream that we’re all having.” The world around you can be regarded as non-physical, but the people around you are still real. Their bodies are not physical but their consciousnesses still exist. That is, there are others in this non-physical universe who are having similar conscious experiences to your own.

The question arises: who else in this “dream” is having such experiences? Can we be sure that other people in the “dream” are doing so? What about other things like dogs, elephants, lizards, eels, ants and jellyfish? Within the “dream world”, can we answer this question using scientific methods?

I argued in my thesis that the answer to that is ‘No’. We only have the “Argument from Analogy” to answer this question, and this doesn’t provide us with testable scientific hypotheses. I’m not saying it’s not valid, but I am saying it’s not scientific. The beliefs are justified by philosophical rather than scientific means. Just like the belief in an external world beyond one’s own conscious experiences.

I’ve blathered about this more on my other blog – and also in my ten-pager, “What the hell my thesis was about?” (downloadable from this unmaterialism page)

Alrighty, I’ve gone on long enough and made hardly any jokes at all, sorry. Go see “Inception” though, I thought it was a jolly good show.

The Matrices (1999-2003)

Posted in Idealism-or-similar by Trevor on May 25, 2010

YOUNG BALD AUSTRALIAN: Don’t try to bend the spoon. That’s impossible. Instead, only realise the truth.

NEO: What truth is that?

YOUNG BALD AUSTRALIAN: There is no spoon.

NEO: Caaaaaaaaarn Strayaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!

It will come as no surprise to know that I’m not the first to write about the philosophy in The Matrix movies. It’s already become a slightly-hackneyed example used by academic philosophers to illustrate assorted conundrums. There are even whole books about it. Which I’ve not read, I’m sorry.

Also there’s a documentary called “Return to the Source: Philosophy and The Matrix” in which a bunch of academics and other crazy people talk about how it’s an allegory of scepticism, Buddhism, Gnosticism, Christianity, post-modernism, Transcendental Idealism and anything else they happen to be a crank for. Pretty much, if you believe it, you can find an allegory for it in The Matrix movies. View the documentary online here.

One of the reasons I liked the first film is that I saw it when I lived in Sydney. And it was filmed in Sydney; it’s full of Sydney buildings and landmarks. Which means that when you step out of the cinema there, you step out into the Matrix. Which is a great wheeze if you’re philosophically inclined.


Come to Sydney! It doesn't really exist! (Yet property prices are high.)

Anyway, as discussed on the Consciousness Verdicts explained page, I am a crank for Idealism-or-similar. Therefore that’s what I see in The Matrices. Here’s why:

Let’s focus on the first movie. The philosophically interesting character here is not Neo the hero, but Agent Smith, the villain. He’s played by Hugo Weaving (Caaaaaaarn Strayaaaaaaaaaaaaaa!) and he talks in an unnatural, sing-song sort of voice that no real person has ever had, except of course for Carl Sagan. See Agent Smith doing Carl Sagan here.

Agent Smith wearing an outfit which the other agents complained about.


Smith is entirely virtual – he’s just a computer program, one of the AIs that have enslaved the humans. Is he sentient? We have every reason to believe so, and Morpheus actually calls the agents “sentient programs” when he introduces them.

So within the world of the Matrix movies, conscious experiences can be generated by an underlying substratum which is very different from the experiences themselves. That is, Agent Smith experiences the world within The Matrix as if it’s an actual world, whereas in fact it’s all computer generated. And so is he.

His consciousness is not something which arises from the brain in his head – it arises from the computer system which executes the Matrix. When Agent Smith sees a spoon – just like the bald Australian child – he can feel it, touch it, hear it, smell it and taste it. (Do spoons have a taste?) But it’s also the case that – in a very real sense – there is no spoon. That is what I call “Idealism-or-similar” (or Phenomenalistic Representationism to be more precise).

The spoon that isn't there. And Angry Anderson in the early days before the angry part took over.


Now – I’m going to get a bit finicky here – when I say “there is no spoon”, I don’t mean that nothing but the perception of the spoon exists. Outside of the human/agent perceptions of the spoon, the spoon does also exist as a bit of code in the Matrix computer. All the humans could drop dead (or continue-to-lie-down dead) and this bit of computer code would still be there. The code holds all the information needed to create the “perception-of-spoon” but it is not itself a spoon. Hence “there is no spoon”.

To put it in terms of Kant’s Transcendental Idealism, the spoon which Agent Smith perceives in the Matrix is “phenomenal” or a “thing-for-us”. Whereas the bit of code which determines this perception is “noumenal” or a “thing-in-itself”. There you go – that’s Kant’s Transcendental Idealism in 38 words. If you can do better, I’ll send you a Mars Bar.**

Righty. So the point is that the situation which Agent Smith is in is the same situation we are in. We have a set of conscious experiences and these are caused by some kind of external structure which we cannot perceive.

In fact, even outside the Matrix this appears to be the case. In the later movies Neo demonstrates an ability to access some other level of reality which underlies the real world and this gives him magical powers. This probably backs up the idea that these movies are even more Idealist. Maybe. Who cares? I gotta confess, once it started to get all magicky my interest started to wane.

In the last film a guy called The Architect – who looks like Tom Wolfe’s suit with Donald Sutherland’s head stuck on it – comes on and explains everything.

The Architect: Tom Wolfe's suit with Donald Sutherland's head stuck on it.


When I first saw the movie I had no idea what he was talking about. But, now that youtube has been invented, I can go back and listen again. Listen to The Architect explaining everything here.

Basically what he says is that The Matrix was built to contain humans, however humans have this thing called free will which defies mathematical modelling, and because of this the Matrix never really lasts forever. Instead, the freewillness builds up in the system and sooner or later a figure like Neo comes forward who has to be let out. This person then goes and starts a new human settlement as soon as the AIs have destroyed the existing one. This has happened five times before. (This time however, Neo really is The One and he beats the AIs instead.)

So here’s an interesting point: the writers of The Matrix draw a strong distinction between human intelligence and machine intelligence. That is, machine intelligence is algorithmic while humans have unsimulatable free will. But free will isn’t bound up with consciousness. The AIs are conscious but they don’t have free will. Which must feel a bit shit I would imagine. Doesn’t seem to bother them though. I guess they’re programmed not to worry about it.

Alright. That’s all. Carry on simulating.

** No I won’t.

The Star Warses (1977-2005)

Posted in Brain+mind dualist by Trevor on November 16, 2009

Okay, I like Star Wars. You can sort of tell by looking at me really. And as far as it being a political allegory is concerned, I think it’s very clever – what with the republic being overthrown by power-hungry baddies who whip up a fake threat so they’ll be granted emergency powers. That’s all good.

As philosophy of mind though, it’s a bit confused. On the face of it, the Star Wars universe appears to be a Brain+Mind Dualist sort of place. The obvious example of this is the fact that living things can tap into The Force. “Luminous beings are we,” says Yoda in Empire Strikes Back, “Not this crude matter. The Force flows through all living things, blah blah blah, it binds us all together and flows between us and rocks or something.” (I might be paraphrasing a bit there.) Anyway the point is that The Force is something which living things can use but droids cannot.

The Force also allows you to live on after the death of your body, as a blue ghost.

Blue ghosts at the Ewok party

Jedi afterlife: Doesn't look very appealing.

The implication seems to be that this is what happens if you’re a “light-side-of-the-force” person. The dark-siders don’t get this perk. For all their Machiavellianism, the dark-siders aren’t interested in being blue ghosts for all eternity. Perhaps it’s a bit boring, doing nothing but hanging around the edges of Ewok parties.

BUT … despite all that, consciousness is actually a separate question. There isn’t any indication that it’s The Force that actually makes things sentient. There just seem to be many sentient things, some of which also use The Force.

So the question which sticks out is: are the droids sentient? If so, this would be a vote for the Consciousness-as-Property camp, as the droids are just mechanical devices. Of course, Exhibit A is C-3PO, the droid who appears to experience all the emotions from anxiety to fear. Is he sentient? I always thought he was. What do you think?

C-3PO: Perpetually looks a bit surprised

It turns out that this question has been debated quite a bit by the Star-Wars-loving multitudes. A great many fans think the droids are sentient, which has led many to question whether it’s ethical to destroy them wholesale, as the “good guys” often do. As one forum plaintiff said: “one might argue that they are only droids, but the seem kinda sentient to me with each having a quite difrent personality, so it is realy o.k. to just kill them all the time?”

And even better: “I’d like think that these personality quirks are just programming effects, otherwise it means that there’s some pretty brutal slavery even by the goodguys in Star Wars.” See the discussion here.

Generally the consensus seems to be that the droids probably are sentient and so should have political rights. However, in the Star Wars universe, they don’t.

I looked into this and – in the vast literature that makes up the “expanded Star Wars universe” – it turns out that there was a thing called the “Rights of Sentience” in the Old Republic (i.e. the political system which the evil emperor and Darth Vader overthrow). This says that all sentient beings have equal rights and cannot be made slaves. The article that describes these rights coyly says: “It is not known how the Old Republic determined which species were sentient.”

However it’s also mentioned that the Rights of Sentience were not extended to droids, leading to several droid revolutions. One of these is described in a short story enticingly entitled “Therefore I am”, referring to Descartes’ “I think, therefore I am” AKA the cogito. The cogito can be read as a statement of one’s own sentience – and I think it should be – but it isn’t always. If you want to get into that debate, go for it. Come back in a few years, when you’re done.

The gist of the story is that the bounty hunter droid IG-88, who plays a walk-on role in Empire Strikes Back (in fact, not even that, it’s just a stand-at-the-back role), is actually sentient. He hatches a plot to kill all the inferior biological lifeforms so droids can take over. He nearly succeeds but is unfortunately foiled by some pesky kids.

One of the notable things about IG-88 is that the writers always talk about how fearsome and dangerous he is. This is partly because he’s obviously made out of old car parts, and looks like he’d blow over in a stiff breeze so they have to big him up. I strongly suspect the designers spent all their time on the Boba Fett outfit – the best outfit in all of science fiction – and then threw IG-88 together 10 mins before filming that day.


IG-88: Supposed to be very dangerous but looks a bit stiff and top-heavy for mine. Good poker face though.

But the other notable point is that IG-88 is sentient. He can even quote Descartes. And according to Wookieepedia there are a number of other Star Wars stories which are told from the droids’ point of view. So given that they can be sentient, why were they never granted the Rights of Sentience under the Old Republic? This keeps me awake at night sometimes.

On the other hand, there’s also Obi-Wan’s passing comment in Attack of the Clones: “If droids could think, there’d be none of us here, would there?” Is this supposed to be a denial of droid sentience? Not necessarily, I would say.

Hmmm (stroke beard here), deep waters. There’s no answer of course, because it’s all made up. But marvellously, the Star Wars fan multitudes debate this sort of question quite often. And some of it is smarter than some of the painful palaver which has passed for academic debate over the years (in my slightly-arrogant opinion).

So what’s the verdict? If the droids are sentient, which generally people think they are, this would make Star Wars a Consciousness-as-Property sort of world. And would also make the Jedi a bunch of bad guys.

However, based on the movies, I think the writers just forgot about the droid sentience question. Given the overwhelming mysticism of the Star Wars universe, I’m gonna say that it’s Brain+Mind Dualist.

But it’s a debatable point (if you’ve really got nothing else to do). Which of these offends us more? That Threepio has no feelings, or that the Jedi are evil slavers? Think carefully, young padawan, much depends on your answer …

Star Trek in general

Posted in Idealism-or-similar by Trevor on November 8, 2009

So. Star Trek then. With its army of computer-savvy fans, can anything original be written about this show? I would say the answer is “No”. (Barring mad things, like claiming the USS Enterprise is made of mascarpone.)

But that doesn’t matter because the point of this blog isn’t to say original things; it’s to classify pop culture things according to their assumptions on consciousness. But rest assured, everything I talk about here has probably been discussed in detail by someone out there in internet-land.

The Star Trek universe is, on the face of it, a Consciousness-as-Property universe. However it also contains a few tantalising tendencies (if you find this sort of thing tantalising, which I do) towards the Idealism-or-Similar camp.

First up, let’s talk Consciousness-as-Property. The big, ol’ example right there in the middle of the show, the one everyone talks about, is of course Mr Data.

Mr Data: A big, ol' example.

Mr Data: A big ol' example right in the middle of the show.

Now I won’t blather on about this too much because it’s kind of obvious. Mr Data is the android who is a member of the Star Trek crew. He looks and acts like he’s human (mainly) but he’s entirely artificial. He usually doesn’t have emotions but in some episodes and movies, he gains them by having the relevant chip inserted in his head. Read all about Mr Data here.

So the obvious question is: is Mr Data sentient? In fact, the same question occurred to the writers who, in a second-season episode of The Next Generation, addressed this very question. In the episode (“The Measure of a Man”), Mr Data is scheduled for dismantling but he goes to court to prove he is a fully-experiencing person, despite his physical difference, and so deserves legal rights. The court rules against him and he is destroyed at the end of the episode, never to appear again. No, of course that’s not true, you can guess how it really ended.

So generally speaking, Star Trek writers and viewers are pretty comfortable with the idea that an artificial copy of a person is also a conscious, sentient, experiencing person. He doesn’t need a soul, he doesn’t need to have some special organic life-essence. His consciousness just arises as a property of his physical, positronic brain. That’s Consciousness-as-Property, bang to rights.

The second example everyone talks about is the Doctor from the Voyager series. The Doctor doesn’t physically exist in the normal sense. He is a moving 3-D hologram of a simulated person, projected into the medical bay by the ship’s computer. He can also grasp and lift things – he has a sort of “physical presence” because the computer also projects a human-shaped force-field into the space he appears to occupy.

Star Trek Doctor - arbitrarily bald

The Doctor in Star Trek Voyager. Cruel hologram programmers made him bald. But it's okay, they also programmed him to like it.

So is the Doctor sentient? Generally speaking, the characters within the show accept that he is, as do the fans. Someone on a Star Trek discussion forum posted this very question – “Are Mr Data and the Doctor sentient?” It generated 5 pages of discussion, and generally the response was in the affirmative, Captain. Though some people were more willing to attribute sentience to Data than to the Doctor. Read the forum here.

Let’s say the Doctor is sentient then. This isn’t just a straightforward case of Consciousness-as-Property. The Doctor isn’t a clever robot; he’s more like a virtual creature in a computer game. This will lead us towards the Idealism-or-Similar view. More on this later.

Before that, this: the character of Moriarty from The Next Generation series doesn’t get as much press as Data and the Doctor, but he’s more interesting. In the show, Moriarty is a character from a computer-generated holographic world, which the crew can experience in their entertainment machine called the holodeck. Read about the holodeck here.

Moriarty is only a virtual person, a character in a complex computer game. In the episode called “Elementary, Dear Data”, someone says they want the holodeck game to include a character who’s smart enough to be a real challenge. The computer creates the character of Moriarty from the Sherlock Holmes stories (not The Goon Show). But this Moriarty is so clever that he realises that the world he inhabits is not the real world, and that he himself is not a real person. He immediately loses interest in being a character in someone else’s game (well you would, wouldn’t you), and wants to take part in the world outside. This can’t be allowed so he is put back in the databanks. In a later episode however, he reappears and, by being very clever, manages to get control of the Enterprise itself. Shenanigans ensue.

In the end, he is tricked into thinking he has left the holodeck when in fact he has just stepped into another virtual world inside another computer. This new virtual world includes a huge amount of exciting, spacey things to discover, so Moriarty and his lady companion can live on indefinitely, exploring their computer-generated universe, having a great time and generally not causing such trouble for the Star Trek crew. It’s a groovy story. Moriarty should’ve got his own spin-off series. Instead he went on to be the butler in “The Nanny”. Life sucks arse.

The conscious computer-generated hologram, Moriarty.

The conscious computer-generated hologram, Moriarty, the Criminal Genius who went on to become a butler in 'The Nanny'. Bummer

So. What’s the Consciousness Verdict? Here’s where the “Idealism-or-Similar” view comes into play. Moriarty is certainly presented as sentient, that’s why the Captain decides not to switch him off a second time. However, the world which he perceives (e.g. space, time, physical things) doesn’t really exist as such. It exists only as computer chip pulses which encode the information. The brain which Moriarty has “in his head” is not what causes his conscious experience. It’s the computer that generates him that does this. The reason he perceives any world at all is because the computer generates it for him. This is the Idealism-or-Similar approach.

Furthermore, at the end of the episode the crew all stand around looking at the computer box which houses Moriarty and his universe, spying on him a bit via a monitor. And the Captain says something like, “Who knows? Maybe we’re all just in a box on someone’s table, with a bunch of people watching us too.” Ha ha ha. Very droll. Of course, yes, it’s a reference to the fact that, yes, they themselves are only characters in a TV show and we, out in the world, are watching them on a little box. Hilarious.

But … within the world of the show, the Captain is also musing on the possibility that the world that they experience is just “virtual” and they are just “virtual people” of a sort. In other words, he’s pondering on the possibilities of Idealism-or-Similar. Hmmm.

One final bit of blather, just for fun. This isn’t particularly relevant but bugger it, it’s my blog. In one of the earlier Next Generation episodes, the crew receive a visit from a Mysterious Being called The Traveler. He has a funny-shaped head. Don’t they all? And he can move very big things around, like the whole ship, just by thinking about it in a Special Way. Young whippersnapper Wesley Crusher, the Ship’s Teenager and identification-figure for the adolescent audience, learns new things from The Traveler and, for a brief time, is able to perform the same spaceship-moving feat. When asked how it is possible, the Traveler simply replies: “Because time and space and thought are one”.

Now I don’t think the writers intended this to be a statement of Idealism; I think they meant it as a bit of quantummy-sounding bullshit, of the type that Deepak Chopra sells. But it can certainly be read as a statement of Idealism if we want to – “Time and space and thought are one” – the most famous idealist of all time, George Berkeley, could’ve said that.

The Traveler from Next Generation

The Traveler: Like Deepak Chopra. But with real powers.

So. In the light of all this forensic evidence, I’m going to put Star Trek into the Idealism-or-Similar camp. The show acknowledges the possibility that the world we perceive is really just our own “conscious experience manifold”; and the real world is beyond our ken. The characters sometimes wonder if this is, in fact, the case. And it’s also manifestly demonstrated – by the action of successful spaceship movement – that time and space and thought are one. It’s a sort of Idealism (or more technically, phenomenalistic representationalism). Defy me if you can! Shit, this is way too long. I’ll stop now.

AI (2001)

Posted in Consciousness-as-property by Trevor on October 13, 2009

This movie looked great on paper. Kubrick started it, Spielberg took over, and it’s based on a good story by a reputable author (Brian Aldiss’s “Supertoys Last All Summer Long”). It’s got many of my favourite things – robots, philosophy, artificial intelligence and Frances O’Connor (“Caaaaaaaaaaarn Austrayaaaaaaaaaaaa!”).

But somehow it turned out to be shite from the anus of Satan. Why? I dunno, I’m not a film critic. But leading specialists agree that it was very long and boring.

But I’m not here to praise or bury the movie but to discuss its philosophy. In brief: it’s set in the future, a young boy gets a bad disease, so the family put him in deep freeze, hoping for a cure one day. Then they buy a cutting-edge robot boy called David, played by professional creepy-boy actor Haley Joel Osment, to take their son’s place. The robot is pretty good but it isn’t quite right; he doesn’t understand the nuances of what’s said to him, he unintentionally becomes dangerous when trying to defend himself, and if he tries to eat food his face has a meltdown.

Suddenly the original son gets better. David the weirdo isn’t quite fitting in, so the robot company decide to take him back and disassemble him. But his “mother”, played by Frances O’Connor (Caaaaaaaaaaarn Austrayaaaaa), takes pity on him. Even though she can’t quite bond with him, she doesn’t want him to die, so she takes him and his AI-robot Teddy Bear out to the forest and leaves them there. He and the Teddy Bear wander around meeting people.

In the end, he sits at the bottom of the sea for 2000 years or something so that when he resurfaces, superintelligent beings have taken over the Earth and use superadvanced technology to bring back Frances O’Connor along with all her memories. They also make him into a real boy so that she will love him in a motherly way like he always wanted. Blah blah blah.

What’s odd about this film is that the robot boy isn’t quite able to understand or act like a normal human. However everyone seems to overlook the fact that his little mentor, the wise Teddy Bear, is an artificial intelligence who can understand everything. While David the weirdy is still trying to work out if he’s supposed to breathe underwater or not, Teddy is engaging in witty repartee with the adults, smirking at ironic allusions, appreciating art, enjoying fine wine, and probably reading Middlemarch in his spare time. Why didn’t they just take out David’s brain and put the Teddy Bear’s in? Then Frances O’Connor would’ve liked him a lot more, and we wouldn’t have had to sit through the rest of this movie.

AI movie - creepy boy, smart Teddy, Frances O'Connor

Teddy, who is artificially-intelligent, attempts to explain something to David, who is artificially-a-bit-thick.

Anyway, the main thing is the Consciousness Verdict: AI presents a pretty unashamed Consciousness-as-Property view of consciousness. David might be a bit dim but he clearly feels things, even though he’s an entirely artificial object. Audiences are encouraged to sympathise with his emotional plight. The other characters in the story believe he’s conscious; Frances O’Connor “rescues” him because she feels sorry for him. She seems to think he has feelings of some sort, even if they’re robot feelings rather than human ones.

Did audiences find this believable? Was their sense of reality offended by the possibility of a boy robot who can feel sad and lonely? Actually it’s hard to say; they seemed okay with it, but whenever anyone talks about this movie, all they say is that it was dull. I suppose it didn’t offend people’s sensibilities enough for them to be annoyed by the idea. On the other hand maybe everyone just gave up caring. Hmmm, I’m going to go tentatively with the former – people were okay with it; it’s a vote of confidence in the Consciousness-as-Property model.
