Topic: 2001: A Space Odyssey

We're joined this week by the sexy and knowledgeable man of science Serge Delpierre.

This is another one of those episodes like The Phantom Menace and Wild Wild West (and to a lesser extent Full Metal Jacket) where I went in thinking one thing and my co-panelists blew my mind. Let them blow you next.

Teague Chrystie

I have a tendency to fix your typos.


Re: 2001: A Space Odyssey

Loved this podcast, guys. Truly outstanding.

About halfway through, Trey mentioned neural networks, and it reminded me of something I read once. I can't remember whether this was in a novel or what — so it might be nonsense somebody made up — but it piqued my imagination anyway.

Story goes that somebody somewhere (I'm doing this from memory, so it's gonna be really vague) was working with genetic programming and FPGAs. FPGAs (field-programmable gate arrays) are these little general-purpose microchips that can be configured to do different things. This experimenter set up an automated test rig. It would start with a randomly configured FPGA, send it an input signal, then measure the output. Then it would tweak the FPGA in some slight but random way and measure the output again, with the goal being to get the chip to give a predetermined output for a given input.

Then he let it run over a weekend or something, and came back Monday morning to check out the results.

The rig had come up with a handful of FPGA configurations that would give the correct output. But when the experimenter looked closely at them, he saw some truly bizarre things. Like one of the configurations included a current path that wasn't actually connected in any way to the main path through the circuit. Thinking it was just redundant, he removed it and tested the circuit … and it didn't work any more. Turns out there was some hideously subtle thing going on with induction between adjacent current paths or something — I'm not an electrical engineer so I didn't understand the details, but the point is the circuit design made basically no sense from an engineer's point of view, but it worked. It took advantage (I guess you could say) of these weird secondary effects in ways that nobody would ever have designed it to do, but because the test rig wasn't set up in a way that excluded those kinds of designs, that circuit passed. In other words, nothing explicitly prevented it from "surviving," so it survived.

From what I understand, neural networks and such are built around similar ideas. If you start with the premise that you don't care HOW it gets done, and build in some mechanism to introduce variations and then another to cull variations based on some criteria, then you can get things you never expected.
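
Just to make that concrete, here's about the simplest version of that vary-and-cull loop I can write down. It's purely my own toy sketch, nothing to do with the actual FPGA rig from the story: the "chip" is just a list of bits, the TARGET list and the fitness/mutate helpers are names I made up, and the loop keeps any random tweak that scores at least as well as what it already has.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]        # the predetermined output we want

def fitness(candidate):
    """Score a candidate by how many bits match the target output."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    """Introduce a slight but random variation: flip one bit."""
    tweaked = candidate[:]
    i = random.randrange(len(tweaked))
    tweaked[i] ^= 1
    return tweaked

# Start from a random configuration, then let variation plus culling do the work.
best = [random.randint(0, 1) for _ in TARGET]
while fitness(best) < len(TARGET):
    variant = mutate(best)
    if fitness(variant) >= fitness(best):  # cull: keep only tweaks that don't score worse
        best = variant

print("evolved:", best)
print("target: ", TARGET)

Nothing in there says HOW the bits have to line up, only that the output has to match, which is exactly why these things can wander into solutions no engineer would design on purpose.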

That's why I don't have a problem accepting things like HAL in fiction. I don't even consider HAL to be a magic bean; certainly no computers currently in existence even vaguely resemble HAL, but we don't understand enough about how computers could be built to declare that HAL is impossible. If the movie asserted that HAL ran on the power of love and could make cupcakes appear out of thin air, that'd be magic. But the idea that a computer might be designed fundamentally differently and thus do fundamentally different things … that doesn't bother me.

Anyway, I'm rambling. But I just felt like spouting off a bit, since this week's podcast left me so full of ideas and thoughts and stuff. Which I guess is my way of saying thanks for being interesting.


Re: 2001: A Space Odyssey

Glad you enjoyed it sir, and thanks for the feedback. I have less of a story problem with HAL now that we've had the conversation we did on the show, but your post actually makes a clearer point for me.

Teague Chrystie

I have a tendency to fix your typos.


Re: 2001: A Space Odyssey

You know, while we're on the subject …

I've loved "2001" ever since I was a kid. I have no memory of seeing it for the first time — I was born in '72, so obviously it had to have been on TV or something. But at first, it was just the "wow, cool" factor. As I've gotten older and more overthinky, I've come to really admire how much depth the movie has. It's open to a variety of interpretations, which makes it fun to watch and reflect on and think about. There's just so much THERE, in an intellectual, pondery, puzzley sort of sense.

Take this for example: HAL is really the only character in the movie who's motivated. I mean in the film-school sense. He's the only character in the film who actively wants something — to carry out his programmed mission — and acts accordingly. Every other character is purely reactive. The apes at the beginning of the film are starving; they'd have died out had the monolith not showed up. Floyd doesn't really do much of ANYthing in the movie. He just responds to the discovery of the monolith on the moon by lining up for a photo-op. Even Bowman, who's arguably the film's protagonist opposite HAL as the antagonist, acts only in response to other events. He only proposes disconnecting HAL after HAL's fault prediction turns out to be wrong. He only goes through with it after HAL freakin' MURDERS his shipmates and threatens his own survival. Really, the only initiative he takes in the movie is choosing to board the pod and visit the Jupiter monolith … until you remember that he's basically following the orders given by Floyd in the briefing video.

So consider this: We have the apes, sitting around literally waiting to die until the monolith shows up. Then we have man at the end of the 20th century, master of the globe and near-Earth space, complacent and idle until the second monolith is discovered. Then we have Dave Bowman — who is the personification of humanity — content to eat his earth-toned paste and watch the news through the whole movie. Nobody actually DOES anything on his own in the whole film … except HAL, who again, is driven only by his programmed-in desire to follow orders.

Viewed through that lens, it's really kind of a misanthropic movie. It says that humanity just sits on its ass until some external event forces it to respond. That in the big picture, we're all just apes cowering under rocks and gazing up at the moon in a depressing parody of comprehension.

But we can also think of it in exactly the opposite way. The ape-like pre-humans in the first act have gone as far as they can go under their own power. They're gatherers and scavengers, and they're surviving, but only just. The monolith comes along and gives them a little nudge, and on the strength of JUST that little nudge — hey guys, antelope thighbones are heavy — humanity explodes over the whole Earth and out into space. The match cut from the bone to the satellite is probably the most effective flashforward in all of cinema, emphasizing the straight-line connection between the apes and modern man by turning four million years of history into an "and then."

Remember that the Tycho monolith didn't actually do anything, unlike the original Africa monolith. It didn't change anybody who came into contact with it. All it did was send a radio signal out into space. It was humanity that chose to follow that signal. That was an even subtler nudge than the first one, but it was enough. We built a big-ass spaceship and fired it off into the dark just to see what was there. And what we found was a door, and we didn't just go "Huh, door," and go home. We opened it and barged through, because that's what humanity does.

Viewed through THAT lens, it's a powerfully humanist film. Look how awesome we are.

And thing is, I don't think either one of these interpretations is bogus. I think they're both totally valid. We can get into a debate about the artists' (Kubrick and Clarke) intent, but if we let the work stand on its own, it can be appreciated in many different ways, ways that totally contradict each other while remaining valid and internally consistent.

That's why I love "2001." That's why it's more than just a movie, to me. It's not merely a story, not merely a sequence of and-then events told in order using pictures and sound. It's a sort of open-ended meditation on what it means to be human. It's a springboard for thinking big ideas about existence and the cosmos, and that's fun. I don't necessarily mean to say I'd like to do nothing but that for the rest of my life — sometimes you'd really rather watch a pretty girl walk in slow motion away from an exploding gas station. But "2001" provides an opportunity to think about big, philosophical things, and sometimes that makes for a really fun time, and that's why I like it.


Re: 2001: A Space Odyssey

Resident Down in Front fly-on-the-wall and assistant Matt tried something fancy this week: he put together a list of links to everything we talk about as we go through the movie, be it people, books, or essays.

I want feedback on this - do you guys think you'll ever be happy to have a list like this, or is it a waste of time?

Thanks, Matt!

2001: A Space Odyssey on IMDB
Buy 2001: A Space Odyssey at Amazon.com (no, we don't get kickbacks)

CinemaScope
The Making of Kubrick's 2001 on Amazon. Looks like it's out of print now, but this will get you going in the right direction.
The Lost Worlds of 2001. Same here, but, ya know, we're trying.
Doug Trumbull
Dennis Muren

John Dykstra
This week's special guest, Serge Delpierre
Films with opening overtures
Stuart Freeborn
"Footprint in the snow" moment
Soundtrack on iTunes
Ray Lovejoy

Multiplane Camera
Parallax
Ballistic motion
Theremin, as used in film
RP, or Rear Projection as opposed to
Front Projection. Both are used in this film.
POV shot, or Point of View shot

Olivier Mourgue designed the chairs used in this scene.
The whole centrifugal (force) thing, as opposed to centripetal force.
Magic Beans
Pretty sure this is the cartoon Brian is talking about; several are on YouTube.
Space: 1999

George Lucas and our own Trey Stokes attended USC, just to name a few. You can too!
Instructions for the Zero Gravity Toilet
For the full story of the American journey to the Moon, we highly endorse the series From the Earth to the Moon.
The Apollo Program
Italian director Giuseppe Bertolucci.

Roberto Benigni
The Discovery as one of Wired Magazine's Five Awesomest TV and Movie Spaceships, November 2008. The Discovery is still cool, people.
Arne Jacobsen created the flatware used in this scene.  It can still be purchased today.

The origin of HAL
GLaDOS from the Valve game Portal
There are plenty of examples of the Discovery pod in Watto's junkyard, just Google it.
It's Damn Interesting just how long you can actually survive unprotected in outer space.

Why no widescreen?
Margaret Stackhouse's breakdown of 2001. Kubrick approved.
The Hero's Journey
Zeitgeist
Galilean Moons
Slit-scan photography
Some enterprising soul "unwrapped" the slit-scan from 2001.

Cinerama
What does it mean to be Human?

Teague Chrystie

I have a tendency to fix your typos.


Re: 2001: A Space Odyssey

You're welcome, Dave.

Just FYI, we're also putting together a glossary of terms we've been using but not necessarily explaining in each episode. At the moment it doesn't exist, but don't panic; we promise we're working on it.

Last edited by Matt Vayda (2010-02-16 19:22:14)

Re: 2001: A Space Odyssey

Jeffery Harrell wrote:

we don't understand enough about how computers could be built to declare that HAL is impossible.

Actually, we kinda do - or at least real-world experience has shown that a HAL-like computer is impractical, for many reasons.

There was a fascinating documentary years ago about a fella who had one of the most sophisticated computers available at the time and was literally trying to teach it to relate to humans as if it were human itself.

The problem he kept encountering, and which eventually sank the project, was that there is just too much info a human absorbs in order to function, and the most important stuff isn't data; it comes from the daily experience of being a human in the first place.

So to get the computer to really understand the meaning of "my toe hurts", it had to understand what a toe is, and what hurting is, and why my toe isn't your toe, and the list goes on almost forever, just to get that one simple idea across.

More recently there were some experiments of putting a computer brain in a simulacrum of a body so that it COULD learn to move around and experience the world in a human sort of way, but these sorts of things are still very theoretical.

Which is why from a PRACTICAL standpoint we long ago abandoned the idea of one mega computer in favor of networked small computers that handle specific tasks.   So you might someday have A ship's computer that speaks to you in English and seems kind of human, but he probably wouldn't also be running your life support system.   That's just a bad design, as the Discovery crew learned too late.  smile

But as far as the neural net concept, yep, that's the scary thing about them.  They can learn to do what you want them to do, but you can never know what they're "thinking" as they do it.   Just as your mailman might seem friendly, but in his head he's actually planning where he's going to hide your body, because he's actually completely insane and only mimicking human behavior.

Freaky stuff to contemplate... will there be a day when we let neural-net computers handle something critical, when we can't fully know why they're doing it, or what might make them  totally lose their sh*t?

Re: 2001: A Space Odyssey

Totally agreed that the one-computer-to-rule-them-all plot point was just a clear case of science-marches-on. I'm willing to let that slide, personally, for two reasons. First, I've accepted way worse strains on plausibility with the excuse that the work was created at another time. And second, well, there wouldn't be much of a story if Dave and Frank had just done a three-finger reboot of HAL and gone on with their day.

But as for the other point, about computers needing to know what toes are before they can be like people … yes. You're right, at least as far as I know. The human sensorium consists of an unthinkably vast quantity of data pouring into our minds every waking moment, starting from birth, and that's just not something we can emulate in a computer. How the brain manages to store a lifetime's worth of memories in such a small space while being powered only by candy bars is a total mystery, at least to me.

But on the other hand, I think one of the fundamental themes of the movie was that HAL wasn't human, or even a particularly good simulation of one. He spoke, yes, and was polite enough to say nice things to Dave about his sketches. But his actions and decisions during the film were totally INhuman. He showed no sense of morality, for the obvious reason that he had none. He had programmed instructions, and those were as the Word of Almighty God to him, and everything else was secondary. When it reached the point where he determined he had a better shot at achieving his programmed objective by killing off five human beings, he killed off five human beings. Or tried to, anyway. No "they had to die" speech, no horns-of-the-dilemma. Just a mathematical calculation. For all his fantastical attributes, HAL was still a machine, and the fact that he was a machine was an important story point.

Think about it this way: Imagine that instead of being a computer, HAL was another astronaut. The mission commander, or whatever. Give all of his decisions to a human character. Seems to me the second act would completely fall apart. No human being would do what HAL did without AT LEAST struggling with it a little bit. Well, no non-psychopath, anyway. Which I guess explains why HAL seems to be universally described as a "psychotic computer," despite the fact that that's pretty much a contradiction in terms.

One of the things I disliked about "2010" (and I say this as somebody who owns a copy of that movie, watches it regularly and enjoys it very much) is that they tried to redeem HAL. Oh, it wasn't his fault; he was just sick. I (a doctor, by the way) have made him better. The others are distrustful of him, but I trust him, and in the end I'll put our lives in his hands so we can have a dramatic pause after which HAL nobly sacrifices himself so that others may live.

Not a BAD story arc. Just not one that's really true to "2001." HAL was never Darth Vader. He was just a machine. He was the thighbone of an antelope. And like we humans tend to do, we built him without truly understanding him. It's (arguably; go with me here) a straight line from the antelope's thighbone to the atomic bomb; human history is wall-to-wall with cases where our ingenuity outraced our wisdom. HAL was just the latest in a long line of unforeseen consequences.

What was it, about a year ago … like an Onion article or something? About the government contractor that was trying to figure out how to make battlefield robots run on blood. We joke about it, but the truth is, that's not that far from the kind of things people have done all throughout history. We're curious and we're lazy and we're just clever enough to do things without thinking them all the way through; this is both our core weakness and our unique strength.

Anyway. I've completely lost sight of my point here. Thanks for the opportunity to ramble pointlessly for a couple minutes. ;-)


Re: 2001: A Space Odyssey

I just alerted Serge to this discussion, maybe he'll weigh in on it. I'm finding hidden benefits in being on a podcast with listeners smarter than I am. This is interesting.

Teague Chrystie

I have a tendency to fix your typos.


Re: 2001: A Space Odyssey

Some great points being made here. Jeffery's bit about our imagination:

"We're curious and we're lazy and we're just clever enough to do things without thinking them all the way though; this is both our core weakness, and our unique strength."

reminded me of a similar point brought up in Michael Crichton's Sphere.

I hope I am not spoiling the story for anyone here, but basically what happens is that the characters gain the ability to make what they imagine real.  There is a quote that goes something like:

"...if you imagine good things, you get delicious shrimp for dinner; if you imagine terrible things, you get monsters trying to kill you."

In the end, as the main protagonist, Norman, is inside the Sphere, he explains it (to himself) thusly:

"The ability to imagine, is the largest part of what you call intelligence.  You think the ability to imagine is merely a useful step on the way to solving a problem, or making something happen, but <i>imagining it</i>, is what makes it happen.  This is the gift of your species, and this is the danger, because you do not choose to control your imaginings; you imagine wonderful things, and you imagine terrible things, and you take no responsibility for the choice.  You say you have inside you both the power of good, and the power of evil, the angel and the devil, but in truth you have just one thing inside you, the ability to imagine."

I would go so far as to say that the ability to imagine is what the first monolith imparted to the apes on the plain, if indeed it imparted anything at all, but for the sake of discussion let us say it did.  One might wonder why it did not also impart a sense of morality (e.g., use the antelope's thighbone to hunt for food, not to kill each other), but to be fair, morality, particularly absolute morality, is a human construct, and indeed another tool we imagined.

I am aware of the whole religion aspect with regard to morality, but I'm not going there except to point out that religion is also merely a tool we imagined.

Now, let us imagine the time between the first two monoliths as being a measure of our ability to imagine.  We find the second monolith, and it sends out a signal.  Nothing is imparted this time, but our imagination has developed to the point where, while we can imagine what is out there, we need to know for certain.  So we imagine ourselves a vessel, and we imagine ourselves a computer to run it.

At this point the trouble we face is, ironically, a lack of imagination.  In the beginning, we knew nothing, and then we gained the ability to imagine.  So we began to imagine things we did not know for certain; in essence this was the beginning of science.  This would start small, "I can imagine what is on the other side of that hill, but I need to know for certain."  So you set out with all these images zipping around your head, and then you reach the crest of that hill, and whatever you were imagining is replaced with what is actually there.  Add several hundred thousand years of evolution, and you get a story where we have developed our imagination to the point where what we are imagining can be realized; we can imagine a vessel that can take us to Jupiter, and a computer to run it, great!  And why? Because while we can imagine what is out there, we are not content with that; we need to know what is out there. 

The problem we have created for ourselves in HAL is that our imagination has grown beyond our control.  In effect, what happened with HAL is the same thing that happened with Apollo 1.  In both cases we used our imagination and built something truly awesome, and in both cases we were let down by circumstances we had not imagined.

I have to wonder, then, what the intelligence that created the monoliths would have made of us.  Would they see us as we see HAL?  Generally speaking HAL is regarded as evil (he is the #13 villain on AFI's 100 Years...100 Heroes & Villains), and certainly no one will argue that killing his entire crew was a good thing, but those of us who have studied the film generally understand that he is simply trying to complete the mission, and for HAL, the mission is the sole purpose of his existence.  While we may disagree with his methods, we also have to understand that HAL is a product of our imagination.  We could have imagined HAL with a sense of morality, but why?  Who would have imagined a set of circumstances under which HAL would calculate that the best chance for the mission to succeed was for him to eliminate the crew?

We have reached a point in our imaginative evolution where our tools have become so complex that we no longer understand them fully; I doubt that any one person in that universe could accurately predict how HAL would react to any given set of circumstances.  We have effectively lost the ability to imagine situations and things that our imagination cannot grasp.  I really wish the film had touched on religion at some point, because in addition to moral value, another key purpose of religion is to explain the unexplainable, or at least help us come to terms with things we cannot understand.

Based solely on the film it is difficult to posit where the third monolith fits into this model, but I wonder if an argument cannot be made for Dave's return to Earth as the Starchild, as messianic.  I do not mean to suggest that the creators of the monoliths are deities, but as has been pointed out in numerous commentaries, any sufficiently advanced technology can be regarded as magic, or if you prefer, the work of God.  We spend the entire film with characters who do not regard the world they live in as in any way extraordinary.  Even today that is true, and my argument is that while we have films like 2001 to remind us of the awesomeness of our world, the people who inhabit the world of 2001 need someone to come along and remind them.  Obviously there is no one inhabiting that universe who can do that, hence the need for someone from outside to point it out.  Dave experiences things he could not have imagined, he himself is re-imagined in a way, and sent back as a messenger of sorts.

Remember in school when the teacher would give you some paper, crayons, scissors, glue, etc., then leave you basically on your own for a while?  Most of the time these efforts were rewarded, and given a place of honor on the refrigerator, and every once in a while someone drew something that got them a visit to the principal's office.  Now imagine it a few years later, Mom has pulled out a bunch of them and you're looking at them together, and suddenly you remember how much fun it was.  You remember how powerful your imagination was back then.

Now imagine that all of us are those children, and the Starchild is the mother. 
Now imagine that you get an idea.
Imagine that idea being Photoshop.

That’s 2001: A Space Odyssey.

Re: 2001: A Space Odyssey

Now that's an interesting angle on it. The whole "imagination" thing, I mean.

Imagination has always been a vaguely frightening thing for me. I don't know if I have an especially vivid imagination or just an average one or what — I have no idea how I'd compare mine to anybody else's to find out — but the notion that I can just conjure up detailed, totally realistic images and sounds and, like, smells and stuff in my head, SEEING them without actually seeing them … well, it's weird. It's always kind of freaked me out, in a way.

Talking about a movie in terms of the book on which it was based is always tricky, but doubly so in this case, because the screenplay and novel evolved in parallel. They're clearly meant to be the same story, but they're conspicuously not identical, and one can't help ascribing significance to that. But in any case, the book lays out this sort of back-story for the monolith-makers. It says that they searched the universe, and found the most precious and rare thing in it was mind. I'm pretty sure that's the word Clarke used: "mind." So wherever they went and whenever they could, they cultivated mind, and that's what they were doing with the monolith in Africa. They were turning apes into creatures-with-minds.

Which of course presupposes that all living things on earth except humans DON'T have minds, which is more of a philosophical statement than a provable fact, but whatever, that's the premise of the story. Either that novel or its sequel, I forget, paints a bigger picture, describing pre-sentient life both on Europa and in the atmosphere of Jupiter. Oh, that's right, it was "2010" that included that bit. Because it made a thing about how the monolith-makers made the choice to exterminate the Jovians because the Europans seemed like they were a better candidate for eventual sentience.

It's a great question, though, what the monolith-makers would think of HAL. Their backstory (vague though it is) implies that they started out as organic life, but through technology learned eventually to imprint their minds on the structure of spacetime itself. So would they see HAL as a new organism independent of us … or the next step in our evolution? And if it's the latter … would they welcome our eventual advancement to their level, or be threatened by us?

See, part of what I like so much about the film "2001" is that it leaves a lot of questions just totally unanswered, which appeals to me because the mystery and the possibility are sometimes more fun than the eventual revelation. It's like they say, nothing is scarier than when you're watching a horror movie and the screen just goes to black. Because what you're imagining in the dark is WAY scarier than anything the filmmakers could depict.

Well, I guess what I imagine (there we are back there again) the monolith to be all about, or the monolith-makers to be all about, or what lay beyond the monolith for Bowman, are all way cooler and more fun than anything that could ever be shown on a screen.


Re: 2001: A Space Odyssey

To go back to HAL and his "evilness," one thing we didn't mention in the commentary was the concept of the Turing Test. We can debate back and forth whether or not HAL constitutes real sentience or just an approximation of it, but based on what we see in the film, HAL would pass the Turing Test and therefore, on some level, qualify as genuine intelligence.

Jeffery Harrell wrote:

I think one of the fundamental themes of the movie was that HAL wasn't human, or even a particularly good simulation of one. He spoke, yes, and was polite enough to say nice things to Dave about his sketches. But his actions and decisions during the film were totally INhuman. He showed no sense of morality, for the obvious reason that he had none.

Wasn't he a good simulation of a human, though? As in the District 9 commentary, I wish I could say I found such behavior impossible to believe in humans. Sadly, such behavior is all too human. That's the precise definition of a psychopath or sociopath, after all: one who mimics social etiquette without the genuine emotion. And all children are born sociopaths, caring for nothing except themselves (and excellent manipulators to boot) until they are (hopefully) forced to learn genuine empathy.

In those ways at least, HAL IS the most human character in the whole thing.

Last edited by Brian (2010-03-03 19:09:20)

Re: 2001: A Space Odyssey

Hmmmmmm. Very interesting indeed. I think it revolves around how you choose to define "human." The Turing Test defined it in a walks-like-a-duck-quacks-like-a-duck way, and that's valid, and by that definition HAL was of course human. But if we draw some smaller circle around those aspects of humanity that we consider to be moral and civilized, then HAL might or might not qualify.

If you're going to set out to build a HAL, I think it's wise to decide up front whether you want to simulate a raw human being, with all the pros and cons of that, or whether you want to simulate the best of us. The trouble with that, of course, is that we're not even particularly good at fostering the best of us in us.

It's easy to imagine that an artificial intelligence will be "better" than us, because it won't have a limbic system. It won't get grumpy or jealous, won't have bad dreams or lose its temper. It'll be this pure reasoning thing, and obviously that's at the heart of what makes us special.

But such a being also could never feel compassion, at least not on an emotional level. Sure, maybe we could program it with some sort of "Asimov's Three Laws: The Long Form" to convince it to be good, but much of what we consider good and virtuous in people is decidedly irrational. So is what we consider to be evil and barbaric. How do we cultivate one to the exclusion of the other?

Which brings us right back around to "What does it mean to be human?" which is one of the great dilemmas of art and, especially, of science fiction.


Re: 2001: A Space Odyssey

(Something else I wish I'd remembered to talk about in the commentary:)

It's entirely possible that it's flat-out impossible to get human-level higher reasoning WITHOUT a limbic system. Because of Descartes, we tend to think of logical reasoning and emotion as two separate and distinct parts of our brain; more colloquially, we picture ourselves as "thinking" with our brain and "feeling" with our heart.

Now, obviously we know that's literally not true and all thinking and feeling is done in the brain, but we still consider "thinking" and "feeling" as distinct modes from one another. That's certainly how science fiction has always treated the subject when it came to artificial intelligence.

But I read a book a few years back called "The First Idea," which proposes a model of human thought where higher reasoning develops out of simpler emotions. The authors suggest that as children we start off with very simplified emotional states - we're either happy or absolutely inconsolable, with no in between. They also suggest that we have a limited ability to sustain an interaction with another human being.  You get so many moments of give and take before the baby's brain resets like a goldfish. But as we get older, the chain of moments we're capable of sustaining gets longer - you can carry on a "conversation" for longer and, as a result, develop more nuanced emotional states.

Instead of only being able to be happy or inconsolable, you're now capable of being inconsolable, angry, frustrated, and annoyed. Eventually, some threshold of self awareness gets crossed and you're capable of saying, "I know I'm annoyed." And eventually, "Why am I annoyed?" which leads inevitably to, "How do I fix my being annoyed?" And we've arrived at higher reasoning.

It seems to make sense to me, though I'm obviously no clinical psychologist and it does destroy a hundred years of science fiction robot writing, including my beloved Data. And there's some interesting crossover between this idea and the fundamental principles behind acting - or at least a certain part of it - that I can't quite articulate just yet.

Re: 2001: A Space Odyssey

Hmm. We're really talking about several different things, aren't we? I'm neither a psychologist nor especially knowledgeable about AI theory, but it seems like we could break down "humanness" into broad chunks and treat them separately.

Take problem solving, for instance. Computers are already pretty good at solving any problem we can rigidly define. Like chess, say.

But we're just hitting that part of the problem with a hammer. We can build a computer that can beat a grand master chess … er … guy, but it costs a million bucks and only works by examining every possible outcome for every possible move. The human player on the other side of the board uses intuition and still wins half the time.
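
To show what "examining every possible outcome" actually looks like, here's a toy sketch of that brute-force idea. It's entirely my own example, not anything from Deep Blue, and it uses the game of Nim instead of chess purely so the whole game tree fits in a few lines; real chess programs pile pruning and evaluation tricks on top of this.

from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones):
    """Normal-play Nim: take 1-3 stones per turn; whoever takes the last stone wins."""
    if stones == 0:
        return False  # no stones left: the previous player already took the last one
    # Try every legal move; we win if ANY of them leaves the opponent in a losing spot.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

for n in range(1, 11):
    print(n, "first player wins" if current_player_wins(n) else "second player wins")

That works because the whole tree is tiny. Scale the same idea up to chess and you get the million-dollar machine I was grousing about, while the human across the board is doing something else entirely.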

Data always struck me as Deep Blue writ large. He was astonishingly good at solving "mechanical" problems, and his knowledge base was so vast that lots of problems could be boiled down to mechanical terms. But he couldn't whistle, which was a metaphor for that ineffable whatever-it-is that makes humans human.

Maybe you're right (or that book you read, or whatever) in that our reason is secondary to our emotions. It would make sense, biologically. Everybody who's ever owned a pet knows that "lower" animals have emotions, or at least they behave like they do. But very few "lower" animals use tools or solve problems in ways we recognize, and those that do are still pretty shit at chess. Those losers.

It's kind of a depressing thought, isn't it? We're only sapient because we're emotional and irrational, but that emotionality and irrationality means we're doomed always to struggle against ourselves.

Which brings us back to the idea of transcendence, I guess, which was another bighuge theme of "2001." (Hey, look, we're talking about a movie here!)


Re: 2001: A Space Odyssey

Nice work on this, guys.

Gold medal Super-win! for Trey for mentioning Space Food Sticks, the greatest chocolate rubber compound invented by mankind.

I find it interesting—and a problem with the film—that the discussion here is all about HAL.

HAL’s dilemma is that the humans are planning to turn him off, which would jeopardize the mission, so HAL decides to get rid of the humans and carry out the mission himself. The problem is that that is actually entirely incidental to the story Kubrick is telling (the ascent of mankind) and merely serves as an entertaining complication. After all, we are not introduced to computers or tools earlier in the film as a kind of villain-of-our-own-making. But it ends up being so entertaining that it takes over the movie.

Warning: I'm probably rewriting this post as you read it.

Zarban's House of Commentaries

Re: 2001: A Space Odyssey

Best newspaper-headline-style summary of "2001" I ever heard:

'Astronaut battles computer, becomes giant baby'


Re: 2001: A Space Odyssey

I remember reading in the TV Guide when I was a kid: "Man travels to Jupiter and meets himself." Talk about people still not getting it 10 years later.

I listened to this again last night and just wanted to belatedly say how good it is. Trey's enthusiasm, the guys pointing out a lot of details, etc. I love it when Teague, who straight up never liked the movie, asks "why". It's one thing for people to say "it's boring, I hate it", but when the discussion is "there's a reason people love this thing, I want it explained", it's interesting. Then he has an epiphany! smile

And the great comments above. The only thing I can add is that when Brian talks at one point about how the spinning gravity thing might not work if the cylinder's too small because G would be lower at your head... that doesn't sound right. Seems to me the direction of force goes through your body like an arrow from the top and pushes the whole body down. Who knows...

And great show notes. The Kubrick and slit scan links alone are interesting reads. The cute ending left a smile on my face. A great commentary.


Re: 2001: A Space Odyssey

Thanks sir. smile

Teague Chrystie

I have a tendency to fix your typos.


Re: 2001: A Space Odyssey

It's been a while since I listened to this one (need to put it back in the queue), but assuming that's what Brian really said, he's not technically wrong, but he's only partly right. The force exerted against the inside of a spinning drum is, in fact, a function of the radius of the drum. But from what I understand, that's not really why a small-radius centrifuge wouldn't work. I've read that it'd fail because of a combination of balance issues and Coriolis effects. If the radius of the drum is small in proportion to the size of your body, then the forces pulling on the balance organs in your inner ears would be skewed enough to really foul up your sense of balance. And any freely falling object inside the drum would appear to fall in a huge curve, which would also work toward upsetting your sense of balance.
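
To put some very rough numbers on the radius thing (the 3 m radius and 1.8 m body height below are made-up figures for illustration, not anything from the film or the podcast): spin gravity is just a = ω²r, so it falls off linearly as you get closer to the axis.

import math

g = 9.81            # m/s^2, target "gravity" at the floor
floor_radius = 3.0  # metres from the spin axis to the floor (assumed, small drum)
body_height = 1.8   # metres from your feet to your head (assumed)

omega = math.sqrt(g / floor_radius)       # spin rate that gives 1 g at your feet
head_radius = floor_radius - body_height
g_at_head = omega ** 2 * head_radius      # what your head feels at the same spin rate

rpm = omega * 60 / (2 * math.pi)
print(f"spin rate: {omega:.2f} rad/s ({rpm:.1f} rpm)")
print(f"gravity at feet: 1.00 g, gravity at head: {g_at_head / g:.2f} g")

With those numbers your head is pulling about 0.4 g while your feet get a full g, which is exactly the head-to-toe mismatch Brian was talking about, on top of the Coriolis weirdness.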

All in all, it'd be upchuck paradise, is how I've heard it described. All things considered, better to just stick with zero G for a trip of less than a couple years.


Re: 2001: A Space Odyssey

beldar wrote:

The only thing i can add is when Brian talks at one point about how the spinning gravity thing might not work if the cylinder's too small because G would be lower at your head... That doesn't sound right. Seems to me the direction of force goes through your body like an arrow from the top, and pushes the whole body down. Who knows...

It's been a while since I've listened to this commentary, but the point I was probably making was that in a rotating chamber with a small enough radius, the weight you feel will be noticeably different at the floor than at head level, which will screw with your sense of balance.

Re: 2001: A Space Odyssey

There's a book I haven't read in years, like since-developing-taste-ago, but considering it's written by Eric Idle, it's probably still excellent: The Road to Mars. As I remember it, some comedians travel a solar-system-wide vaudeville circuit while their android tries to write his thesis and figure out the nature of comedy. Deals with as much of the "human/robo wuaaa?" as you can read into it while remaining really enjoyable and funny.


Oh, and this down here is the reason for the post, but I didn't want to just drop a bullshit image link into an interesting conversation.

http://twitpic.com/2wuzsv


Re: 2001: A Space Odyssey

Sorry Brian, but I have to agree with this guy on the matter...


http://www.confusedmatthew.com/2001%3A- … dyssey.php


Re: 2001: A Space Odyssey

I made it halfway into the second part before turning it off. I don't disagree with what seems to be his only observation, but he kind of annoyed the shit out of me and made my ears hurt.

If at some point he makes it clear that he knows the whole subtext bit about the monoliths, then, yeah, I'd be able to have a non-contentious conversation with him about the movie. I just don't like his style. (And by the time I turned it off, he hadn't seemed to indicate that he knew what was going on with the monoliths and the general metaphor of it.)

Teague Chrystie

I have a tendency to fix your typos.


Re: 2001: A Space Odyssey

Wow. I've always felt Kubrick often didn't have a very good handle on story (the beginning of The Shining is hilariously ham-fisted), but this guy is completely off the rails. Does he go to art galleries and shout at paintings? "Landscape! Landscape! Landscape! Tell me a story!! This. Is. Nothing!!"

EDIT: Weird. Got punbot questions ("Which Star Wars movie is best?" which I apparently failed twice, then "What is the name of our planet?"), then the end of my post got chopped off. Was the repetition throwing up a red flag?

Last edited by Zarban (2011-05-10 00:10:29)

Warning: I'm probably rewriting this post as you read it.

Zarban's House of Commentaries