Tim’s Vermeer

Camera Obscura

Last week I attended a screening of Penn & Teller’s new documentary Tim’s Vermeer at the Coolidge Corner Theatre as part of their Talk Cinema series.  This screening (and the discussion that followed) was hosted by Boston Globe film critic Ty Burr.

As Burr warned us before the movie began, there would be a lot to talk about.  He was right.  But let’s take a look at the film first.

The film follows the inventor and engineer Tim Jenison as he attempts to recreate The Music Lesson by Johannes Vermeer using optical devices and mirrors that he believes (along with artist David Hockney and art historian Philip Steadman) Vermeer must have used to obtain the photorealism present in his paintings.  There has always been a mystery surrounding Vermeer’s work, especially the fact that there are no traces beneath his paint of preparatory sketches.  Was he simply able to paint photorealistic paintings from memory?  Could he have had superior eyesight that would allow him to capture visual anomalies in his work that are normally hidden to the naked eye?

There has been a tendency throughout art history to romanticize Vermeer as a genius but never attempt to understand why he was a genius.  This is exactly what Tim sets out to understand in his experiment (which the film traces from conception to conclusion).  Though Tim is enthusiastic about proving his theory, there has been hesitation in academic circles to accept the theory popularized by Hockney and Steadman that Vermeer was aided by optics.  The reason for this is the increasingly outdated belief that the worth of an artwork is dependent on the amount of traditional skill and effort used to produce the piece.  Even though abstract and conceptual art have been dominant in the artworld for well over a century, this belief persists.  It still accounts for negative reactions that some people have toward art they do not understand, exemplified by the common reaction, “My kid could paint that.”  So why is it that people seem so unwilling to see technology as a useful aid to artists and not a dirty trick or a cheat?

Teller addresses this issue in an interview with The Village Voice, which Ty Burr also quoted from during our discussion:

I blame it on academia. Academics very often don’t have to do the art that they write about. They also don’t have to make a living from the art that they write or teach about. So I believe they’ve never gotten their feet wet, their hands dirty, and said, “OK, how would I go about making a painting that I would sell to support my family?” If you talk to real artists who actually produce things, they’re not woofty. They don’t view artists as supernatural beings who just walk up to a canvas and paint with light. They use whatever tools they can to achieve the effect, because the important idea is to get the idea that’s in your heart to the heart of someone else.

What I noticed at the screening of the film at the Coolidge was that the audience was overwhelmingly open to accepting Vermeer in these terms.  When Ty Burr asked whether the use of technology should change our view of Vermeer as an artist, or if the technology is a “cheat” and makes Vermeer “lazy,” the audience responded “of course not.”  They reiterated a point made in the film that even Renaissance artists were aided by technology (e.g., the algorithm behind perspective) in their effort to increase the level of realism in their works.  It is a point I have often made in defending electronic music against accusations that the artists are not using “real” instruments: if that is your belief, you do not understand the meaning of the word “instrument.”  What Tim Jenison proves in the film is that there is still a lot of skill and effort involved in creating and manipulating the technology that one may use to create art.

That said, I am still not entirely comfortable judging an artwork based solely on the quantification of skill and effort supposedly put into it.  I would hope that the finished artwork ultimately matters more than the methods used in its creation.  Duchamp’s Fountain is still an important work, regardless of how “easy” it was for him to throw together.  Leaving Vermeer’s painting skills and use of technology aside for a moment, his paintings are still miracles of composition that can be appreciated aesthetically as masterpieces of 17th-century Dutch art.  In other words, I think we can look at Vermeer as a proto-camera and judge his paintings by the same standards by which we now judge photographic art.  However, if your appreciation of these works is dependent on a romantic conception of Vermeer as a man struggling with just his brush and without the aid of any other tools to achieve his artistic goals, I would suggest that you are only appreciating a mere expenditure of energy and not necessarily the actual paintings.

With Vermeer’s legacy safe, at least among my fellow audience members, Ty Burr asked: “Is Tim an artist?”  One woman answered “no,” because Tim produced a copy of an already existent work.  I find it hard to argue with that point.  But I would like to add that, based on what we see of Tim’s methods in the film, he certainly could be an artist if he applied himself toward the creation of original works and submitted them to the artworld for evaluation.  (Actually, the film asks what I find to be a more provocative question: “Is Tim an inventor or an artist, or is that distinction important?”)

Finally, “Is the film an artwork?”  In the same interview referenced above, Teller talks about the process of finding the film’s story from the 2,400 hours of footage that was shot:

I like that term, “narrativizing.” It’s exactly right because, in real life, you don’t know the story of your day. If you get to the end of the day, and you get to your diary entry, you know what the story of your day was. We had four years of undifferentiated human experience that included a lot of technical stuff, a lot of funny stuff, a lot of dull stuff, and we had to go into that and say, “What is the core of the story?”

In finding the narrative, the form of a story within the chaos of footage, Teller, narrator Penn Jillette, and editor Patrick Sheffield clearly create a work of art.  The story is smart, moving, and funny, and it is scored elegantly by composer Conrad Pope.  The filmmakers even utilize LightWave, a technology created by Tim’s company, to craft illustrative animations, proving that artists today are still using whatever means necessary to make the best art they possibly can.  Not only is Tim’s Vermeer such an artwork, it is also one of the standout documentaries of the year.

In closing, despite recent attempts by people like Leon Wieseltier to keep science and the humanities separate, as if the humanities were somehow threatened by science and technology, the relationship between science and art remains a fruitful one.  This film, and the work of Vermeer at its heart, are a testament to that.

Further reading:


Leviathan


The first thing we notice is the noise: loud machinery, clanking metal, grinding chains.  Then we catch abstract glimpses of the moving parts—and, for brief seconds, the sight of the dark ocean crashing below.  But we can’t seem to catch our bearings.  The camera is purposefully disorienting us, unsettling us.  And it only gets worse from this point forward.

The soundtrack will soon give way to the wet scaly slaps of dying fish, the rattle of cracked shells, the gurgles of submersion, and the prehistoric calls of ravenous gulls.  The visuals will move somewhat rhythmically between machines and flesh, metal and viscera.  (One may easily be reminded of mid-90s Nine Inch Nails music videos.) This is Leviathan, a captivating documentary by Lucien Castaing-Taylor and Verena Paravel of the Sensory Ethnography Lab at Harvard University.

In regard to theme, narrative, or even setting, we have no firm footing.  We are on a fishing vessel, but we might as well be on another planet.  The voices of the crew sound alien.  Their faces are the only evidence that they are human.  And they are our only respite from the dripping blood, the dancing fish heads, the bulging eyeballs.  Indeed, the animals look horrifically distorted and bloated, like demons out of Hieronymus Bosch.  The aforementioned birds, in flight against the black sky, recall both the Ride of the Valkyries from Wagner and the flight of dancing spirits in the Night on Bald Mountain sequence of Disney’s Fantasia.  This should give you an idea of the film’s overall tone, as neither reference supplies much comfort.

Leviathan opens with an epigraph from the Book of Job, and it ends with a credit reel that lists the scientific names of the depicted species.  The significance of these details, if any, is left for the viewer to decide.  Some have read Leviathan as a parable about the viciousness of humanity against the environment, which it rapes and wastes with abandon, its hulking fishing vessels being construed as the true “Leviathan” of the title.  There is perhaps good evidence to support this reading.  However, I think that the film is better experienced with no such narrative in mind.  It should be felt viscerally, like a psychological horror movie that creeps under your skin like botfly larvae.  As already mentioned, it uses frequent disorienting cinematographic effects typical of films in that genre, and the audio track embodies the very essence of foreboding disquiet. On top of this, a few scenes of systematic butchering are certainly unnerving for anyone who has seen slasher films like The Texas Chainsaw Massacre.

A close relative to Leviathan is Werner Herzog’s Lessons of Darkness, a film that presents Kuwaiti oil fires as alien phenomena.  Both films offer us an alternative view of the world we think we know so well, and both make no attempt to shield us from the horror that runs so close to the surface of all that we do, breaching it here and there like starfish limbs through a fish net.  But Leviathan does it better.  It’s truly an astonishing and unforgettable work.  Let it wash over you; let it nauseate you and stir up your unconscious fears.  Maybe you’ll enjoy it as much as I did.

Further reading:

In Defense of Heresy in Criticism

Full English Breakfast

Once a week, Criticwire asks a group of film critics a question and compiles their responses.  This week’s Criticwire Survey seems to have caused a bit of a stir.  Here is the question posed by Matt Singer:

What movie widely regarded as a cinematic masterpiece do you dislike (or maybe even hate)?

This question and its responses were promoted under the incendiary headline: “Overrated Masterpieces.”  Needless to say, this provoked some outrage, both in the comments and across the web.  Only one critic, Glenn Kenny, appears to have left the proceedings unscathed.  The reason for this is that he refused to name a film:

I find this question especially dispiriting, as it’s really just a form of bait, and a cue for individuals to come up with objects to snicker at, feel superior to, and all that. I’m sure many critics will have a blast with it.

Kenny follows this with a passage from Richard Hell’s autobiography where Hell writes of an encounter with Susan Sontag in which she laments the fact that she has opinions because, as Hell puts it, “opinions will solidify into prejudices that substitute for perception.”

On Twitter, New York Times critic A. O. Scott singled out Kenny for praise:

watch @Glenn__Kenny enlist Susan Sontag and Richard Hell to smack down glib link-trolling pseudo-contrarianism

First of all, I would argue that Kenny himself is using this opportunity to “snicker at” and “feel superior to” his fellow critics.  Second, I would argue that the point of this particular survey is to counter popular opinions that may have solidified into prejudices, not the other way around.  Finally, I think that it is Scott who is being “glib” in his dismissal of the exercise as “pseudo-contrarianism.”

Each individual critic (Kenny included) will have points of divergence from the critical community to which he or she belongs.  This is only natural; individuals have individual tastes (e.g., likes and dislikes) based on individual life experiences.  But here is an unsettling fact: many people will accept that certain films are sacred—sometimes irrationally and without having actually seen them—for the single reason that the films have been blessed with critical approval and labeled masterpieces.  The critics who answered the Criticwire Survey are simply challenging this automatic acceptance, some even going so far as to offer rational and articulate defenses of their opinions (the opposite of pseudo-contrarianism, I would say).

Interestingly, James Ramsden, a food blogger at The Guardian, wrote a piece last week called “The Great British fry-up: it’s a national disgrace.”  The article comes with the following blurb:

The full English breakfast is the most overrated of British dishes – even the name is shuddersome. How did we become shackled to this fried fiasco?

Just as with the Criticwire Survey (and perhaps again due to the word “overrated”), Ramsden experienced a lot of backlash.  He felt compelled to write a response (published only a day after the Criticwire Survey): “Which well-loved foods do you hate?”  In this piece, we learn that Ramsden received accusations similar to those received by the film critics.  For example, he, too, was accused of trolling (maybe by the A. O. Scott of the British food blogging world).  However, Ramsden understands where the attacks are coming from:

I understand it because I’ve felt it too. It is perhaps not a rational reaction to a subjective aversion […], but we feel strongly about food and are thus oddly offended by someone vehemently opposing that which we cherish.

Yes, and people apparently feel strongly about film as well and will oppose subjective aversions to well-loved films with equal vehemence and irrationality.  Ramsden, after providing a long list of similar aversions from some notable chefs and food critics, ends his piece by stating:

The common denominator with all of these dislikes is the mutual conviction that the other person is a loon, even a heretic. There are certain aversions – anchovies, haggis, balut, kidneys – that are entirely understandable (you don’t often hear cries of “you don’t like kimchi?!” except perhaps in certain foodish circles), but when it comes to dissing curry, fish and chips, pasta, or indeed a fry-up, it turns out people are, at best, going to think you very odd indeed. Still, can’t blame a man for trying.

Glenn Kenny chose not to name a film on which his opinion differs from that of the masses.  Does that mean he holds no such opinion?  That no such film exists?  Hardly.  As I said, he used this opportunity to elevate himself above his fellow critics under the pretense that criticism has loftier goals than this sort of muckraking.  I think that he just didn’t want to get his hands dirty.  I prefer the “loons” and the “heretics” who are unafraid of their own subjectivity.  On a related note, I believe that Pauline Kael would have loved this week’s Criticwire Survey.  Especially the word “overrated.”

Further reading:

Hume, Kael, and the Role of Subjectivity in Criticism

A Defense of Banksy

Dancers on a Plane by Jasper Johns

Once again, I feel compelled to address some claims made by the art critic Jonathan Jones at The Guardian.  This time, Jones has written a piece attacking Banksy.  This in itself is not the problem.  The problem is that the attack makes very little sense under close examination.

Here is the crux of Jones’s argument:

Some art can exist just as well in silence and obscurity as on the pages of newspapers. The Mona Lisa is always being talked about, but even if no one ever again concocted a headline about this roughly 510-year-old painting it would still be as great. The same is true of real modern art. A Jasper Johns painting of a network of diagonal marks surrounded by cutlery stuck to the frame, called Dancers On a Plane – currently in an exhibition at the Barbican – was just as real, vital and profound when it was hidden away in the Tate stores as it is under the gallery lights. Johns does not need fame to be an artist; he does not even need an audience. He just is an artist, and would be if no one knew about him. Banksy is not an artist in that authentic way.

I strongly disagree that art can exist in a vacuum; I think it needs an audience to be art.  Thus, I find it absurd to claim that Jasper Johns “does not even need an audience” to be an artist.  How does that work exactly?  It doesn’t.  Jones is simply presupposing a metaphysical reality in which art possesses inherent value independent of humans.  This presupposition is a fiction, and he offers nothing to support it.  How can a work remain profound if no one is around to bestow the value of profundity upon it?  And does it not take a human mind to transform Jasper Johns’s “network of diagonal marks surrounded by cutlery stuck to the frame” into a cohesive whole?  Truly, then, one cannot dismiss Banksy on the grounds that his work demands an audience.  All art does.

Another problem that I have with Jones’s argument is that he takes the properties that make Banksy aesthetically interesting to most people and transforms them into Banksy’s aesthetic shortcomings:

Banksy, as an artist, stops existing when there is no news about him. Right now he is a story once again, because a “mural” by him (street art and graffiti no longer suffice to describe his pricey works) has been removed from a wall and put up for auction. Next week the story will be forgotten, and so will Banksy – until the next time he becomes a headline.

Part of Banksy’s “art” is in the impermanence of his pieces and in the confrontational nature of his “murals,” which are designed to jolt people out of their daily routines, to make them stop and notice something, to see things differently.  Perhaps comparisons to static pieces like the Mona Lisa are not the best means to understand performance-based work of this nature (though I admit that because the art market has laid claim to Banksy, such comparisons are not necessarily off base, either).

But “street art” is hardly the first recognized art form to be temporary and confrontational in the manner adopted by Banksy.  And why does Jones regard fame and branding as faults or weaknesses of the artist?  These attributes were obviously as essential in solidifying the legacies of the artists whom Jones admires as they were in elevating Banksy above his peers.

Jones claims that he wants “art that is physically and intellectually and emotionally real.”  Unfortunately for him, as his blog post on Banksy makes clear, he seems to have no idea what that even means.

Further reading:

Banksy goes AWOL

On Morality in Criticism

Zero Dark Thirty

An interesting question has been making the rounds in certain critical circles since the release of Kathryn Bigelow’s Zero Dark Thirty this past December.  And I’m not talking about the question of whether or not the film endorses torture (it doesn’t).  I’m talking about the broader question that has been phrased this way by Danny Bowes at Movie Mezzanine:

[…] is a critic under any obligation to render a moral judgment on a film?

After pointing out that the debate extends beyond Zero Dark Thirty to films like Django Unchained and Beasts of the Southern Wild, Bowes states:

With each of these films, critics praising the aesthetics of each have been accused of ignoring, rationalizing, or even siding with offensive content therein. In response, critics have been forced into a “no I do not” defensive posture, and a great deal of huffiness about art for art’s sake and the primacy of the work over the given critic’s personal beliefs and austere objectivity and so forth has ensued.

In the past, I would have agreed with the l’art pour l’art critics who claim that they can separate their personal beliefs from their aesthetic evaluations of a given film and adopt an “objective” or an “impersonal” position from which to judge the work in question.  But not anymore.  Indeed, it is my understanding that an aesthetic judgment is inseparable from a moral judgment, and vice versa.  I think that Bowes agrees:

Every act of criticism is a moral judgment, and not in a glib, media-trolling, mid-’60s Jean-Luc Godard way, either. However objective any critic tries to be in evaluating any work, the evaluation is being conducted by a matrix of observation, cognition, and the innately unique assembly of life experience and education that makes up all the things the critic knows and how s/he knows them.

Yes.  Each person who makes an aesthetic judgment on a work of art cannot escape his or her “unique assembly of life experience and education,” and this assembly includes a person’s adopted morality.  Thus, I cannot consciously separate my moral leanings from my critical evaluations of artworks any more than I can separate my aesthetic taste from my moral judgments, no matter how hard I might try to hide the influence of one over the other.  As the character Bill Haydon says in regard to his treason in Tinker Tailor Soldier Spy, “It was an aesthetic choice as much as a moral one.”

Bowes writes at the end of his piece:

The decision a critic makes to approach a movie on its own terms with as much objectivity as s/he can muster is a moral decision. Not everyone succeeds in completely divesting their preexisting baggage.

Not exactly.  I would say that no one succeeds in this and that the morality present in a work of criticism is never a “decision” but inevitable.  In addition, we can never really know the multitude of factors that have brought us to our critical assessments (factors as disparate as temperature, mood, and peer pressure), so how can we choose to ignore some while allowing for others?  We can’t.

In Daybreak, Friedrich Nietzsche writes:

You dislike him and present many grounds for this dislike—but I believe only in your dislike, not in your grounds!  You flatter yourself in your own eyes when you suggest to yourself and to me that what has happened through instinct is the result of a process of reasoning. (D358)

Though criticism remains our best attempt to account for our likes and dislikes, we must recognize the limitations of the undertaking (e.g., the fact that it might just be a post-hoc rationalization of a knee-jerk judgment).  And we must stop pretending that we can consciously control what influences our opinions and what doesn’t, whether it be our moral conditioning, environmental factors, or something else entirely.  The best we can do is be honest regarding the extent of our knowledge in this area.  In most cases it will be minimal.

Further reading:

5 Bizarre Factors That Secretly Influence Your Opinions

Video Games Are Art

Smithsonian American Art Museum

I have wanted to write about video games as art for some time now, but I worried that the question was no longer relevant–that most people (including me) had finally accepted that video games can be art.  This past November, Disney released Wreck-It Ralph, a film which brings to life video game characters and worlds in the manner of Pixar’s Toy Story.  In his review of the film in The New York Times, A. O. Scott writes:

The secret to its success is a genuine enthusiasm for the creative potential of games, a willingness to take them seriously without descending into nerdy pomposity.

Clearly, I thought, this means that we’ve reached a turning point–that critics like A. O. Scott are now on board and willing to accept the aesthetic potential of games.

But I was wrong.  On November 30, Jonathan Jones, the art critic at The Guardian, published a blog post entitled “Sorry MoMA, video games are not art.”  The post is a response to the fact that the Museum of Modern Art in New York plans to curate a selection of video games as part of its Architecture and Design collection.  Despite the fact that this is not the first time an art museum has played host to video games (the Smithsonian American Art Museum held such an exhibit earlier this year), Jones has decided to put his foot down and play the predictable role of arbiter of what is and isn’t art (the role once famously played by Roger Ebert in this particular debate).  He writes:

Walk around the Museum of Modern Art, look at those masterpieces it holds by Picasso and Jackson Pollock, and what you are seeing is a series of personal visions. A work of art is one person’s reaction to life. Any definition of art that robs it of this inner response by a human creator is a worthless definition. Art may be made with a paintbrush or selected as a ready-made, but it has to be an act of personal imagination.

Whether through ignorance or idiocy, Jones has made an argument that is simply not applicable to video games.  If he were to watch the great documentary from this year on the subject of independent game design, Indie Game: The Movie, he would realize that he has no right to claim that video games are not the work of personal imaginations.  In that film, we see just how personal games can be to their creators.  We watch Phil Fish, for example, as he obsesses endlessly over every detail of his game FEZ, postponing its scheduled release for years and revealing how much of himself is in the game–how it has become his identity.  We also watch Edmund McMillen and Tommy Refenes as they complete Super Meat Boy, an ode to their childhood video gaming experiences. From the Wikipedia synopsis of the film:

McMillen talks about his lifelong goal of communicating to others through his work.  He goes on to talk about his 2008 game Aether that chronicles his childhood feelings of loneliness, nervousness, and fear of abandonment.

Surely this suggests the extent to which games can be the works of personal imagination.  Another film playing the festival circuit this past year, From Nothing, Something, a documentary about the creative process, also features a video game designer among its artist subjects: Jason Rohrer, who “programs, designs, and scores” his games “entirely by himself.”  It does not get more personal than that.

And this is not even limited to independent game design (a field which Jones might not even know exists).  Surely the games of Nintendo’s Shigeru Miyamoto are recognizable as products of that creator’s personal vision.  Through pioneering works such as Donkey Kong, Super Mario Bros., and The Legend of Zelda, Miyamoto became one of the first auteurs of game design.

Regardless, Jones ends his argument against video games as art by making a point about chess:

Chess is a great game, but even the finest chess player in the world isn’t an artist. She is a chess player. Artistry may have gone into the design of the chess pieces. But the game of chess itself is not art nor does it generate art — it is just a game.

Jones’s use of chess to illustrate his case against the aesthetic value of games is interesting because he writes about the game in a previous blog post titled “Checkmates: how artists fell in love with chess.”  In this piece, he doesn’t necessarily call chess art (he seems content to assign it the role of muse), but he comes awfully close:

It is a game that creates an imaginative world, with powerful “characters”: this must be why artists were inspired to create designer chess sets long before modern times.

On top of this, Jones seems willing to concede that chess pieces can be art.  Would he also concede that pixelated characters, orchestral scores, and other “pieces” of a video game can be art?  (To be sure, there are clearly “traditional” artists who work on individual aspects of games: graphic designers, writers, and musicians.)  My question would then become:  Why can’t the many artistic pieces cohere into a single work of art that also happens to be a game?  Architects create buildings that serve as works of art as well as living spaces.  Imagine an art critic who would perhaps recognize the artistry in a stained glass window yet say condescendingly of the cathedral in which it is found: “It’s just a building.”  The idea is absurd.

I am all in favor of meaningful distinctions between objects.  We can have art and games as separate categories.  But we must acknowledge that there can indeed be overlap.  I already demonstrated on this blog how food can serve both instrumental and aesthetic ends.  The same is true for games.

In his classic essay “The Artworld,” Arthur Danto writes:

To see something as art requires something the eye cannot descry — an atmosphere of artistic theory, a knowledge of the history of art: an artworld.

The fact of the matter is that video games have now been allowed into two respected art museums (the Smithsonian American Art Museum and the Museum of Modern Art), the National Endowment for the Arts has started to allow funding for game designers, and the conversation about the artistic merits of games is alive and well–within the general populace, yes, but also within the hallowed halls of academia.  This is enough, in my opinion, to qualify video games as art.  Clearly, in practice, that is simply what they are.  Psychologically, people are experiencing them in the same way that they experience objects more commonly classified as art (e.g., novels and movies).  The fact that critics such as Jonathan Jones and Roger Ebert will not allow for the status of art to be extended to games–and that they would rely on smug and silly arguments to prove their points–says more about them than it does about the reality of the situation.  They are great critics, but here, where perhaps they feel their grasp loosening on subjects in which they believed themselves to be experts, they are simply wrong.  We see some metaphysical justifications for their beliefs, but primarily we see the constricting influence of habit and conditioning–their inability to see other than what they have been trained (or educated) to see.  But no matter.  Others seem to have a much easier time seeing the artistic potential of games.

In an interview with USA Today about composing the theme song for the game Call of Duty: Black Ops II, Trent Reznor says:

I’ve watched with a kind of wary eye how gaming has progressed. I was there at the beginning with Pong in the arcade, and a lot of my great childhood memories were around a Tempest machine. I really looked at gaming as a real art form that is able to take a machine and turn it into something that is a challenging, human interaction puzzle game strategy.

And according to Penn Jillette (from the November 18 episode of his Penn’s Sunday School podcast):

Video games are culture; they are a new way of doing art.  You know, I fought against them at first.  I used to say that, you know, being able to make up a story as you went, I fought against that.  I did a couple of whole speeches about how you want the plot in Shakespeare.  But I’ve now understood.

And so have I.  The more interesting questions, moving forward, are: “By what criteria are people recognizing games as art?  By what standards of taste are these games being critiqued?”  As Luke Cuddy puts it in his review of the book The Art of Video Games in The Journal of Aesthetics and Art Criticism:

We must remember to compare the good to the bad, the same way we compare Foucault’s Pendulum (Umberto Eco, 1988) to The Da Vinci Code (Dan Brown, 2003).

So what are the best games?  What are the worst?  What distinguishes them from each other?  I will leave those questions to the more experienced gamers and critics.

Further reading:

Prometheus: “There Is Nothing in the Desert, and No Man Needs Nothing”

Please note that the following post may contain spoilers.

Ridley Scott’s Prometheus is chilling science fiction, a Lovecraftian space odyssey that poses some big questions about the origin of life and its ultimate purpose.  David Denby has called it “a metaphysical ‘Boo!’ movie.”  Andrew O’Hehir compared it to Terrence Malick’s The Tree of Life:

Both are mightily impressive spectacles that will maybe, kinda, blow your mind, en route to a hip-deep swamp of pseudo-Christian religiosity.

I want to counter those claims by demonstrating that, though characters in the film may have faith in something beyond the material world, the film itself (mostly through the android David) depicts a world incompatible with that faith.

The film opens with a humanoid on what is presumably primordial earth.  A spaceship is seen in the distance, apparently abandoning him.  He drinks something from a cup and begins to disintegrate.  His genetic material, we’re led to believe, helped spawn life on earth.  Thus, we’re immediately given the film’s premise: an alien race “engineered” humans through this initial act of terraforming.  This premise, quite naturally, invites skepticism.  Even if an alien race did spark life on earth, there is no way that they could have predicted the paths that this life would take.  There is no way that they would have been able to engineer the many happy accidents that allowed a branch from this seed to evolve into humans.  Later, we will meet a biologist among the crew of the spaceship Prometheus.  He knows how life evolved on earth and voices his skepticism at the idea that we were somehow designed.  How does the script handle this contradiction?  It renders the biologist irrelevant, as nothing more than a cowardly stock character.  But skepticism hardly matters; we have already seen the creation of life on earth, so we must accept this premise, believable or not, as a fact in the world of the film.

This brings us to our protagonist, archaeologist Elizabeth Shaw. She (along with boyfriend Charlie Holloway) is the one who uncovered the cave paintings supporting the theory of extraterrestrial parentage.  The mission of the Prometheus, we learn, is to find our alien ancestors and ask them why they created us.  The assumption, of course, is that there is a meaning to human life, a reason for us being here.  And this meaning, according to Shaw, is out there among the stars for us to discover.  She wears her faith in this idea like a virtue; she also wears a cross.

But Shaw isn’t the only one who has a religious worldview at stake.  Even Peter Weyland (the sinister corporate interest who is funding the mission) expresses faith in metaphysical gobbledygook when he says that David, his android creation, differs from humans in that he does not possess a “soul.”

In a character analysis at the blog Virtual Borderland, the author writes:

We are told that David is different from humans because he has no soul — but is the trick really that David knows humans don’t either? Where humans pretend that they are different, that we have creators with answers to our questions, gods who will elevate us above the rest of the universe, David accepts the empty desert and the trick is simply: not minding that it hurts.

I agree with this analysis, and I think it is a key to understanding David’s function in the film and his obsession with Lawrence of Arabia.  His fondness for the David Lean film is particularly fascinating.  He even attempts to mimic Peter O’Toole through his appearance and mannerisms.  In this ability to learn through experience and observation and to mimic the behavior of model figures, David is perhaps more human than the other characters can comfortably realize, despite his lack of a “soul.”  As the author of the character analysis suggests, maybe David differs most from humans in that he can accept the meaninglessness of existence.  For example, David knows all too well why he was created:

DAVID:  Why do you think your people made me?

HOLLOWAY:  We made you because we could.

DAVID:  Can you imagine how disappointing it would be for you to hear the same thing from your creator?

In exchanges such as this, David perfectly undermines the metaphysical delusions of his companions.

So what of Shaw’s faith?  What does it mean in this context?  As I already discussed, we are shown the creation of life right at the start, so we at least know that Shaw’s theory of extraterrestrial parentage is correct (absurd as it is).  We then see Shaw and Holloway uncover physical evidence to support their claim (cave paintings around the world that depict giant figures pointing to a specific star system).  People are reasonably skeptical, but rather than arguing from the strength of their evidence, Shaw relies on a typical religious defense: “It’s what I choose to believe.”  She clearly possesses a metaphysical bent; she demands a meaning for her life outside of her own making, and as I said earlier, she wears her faith in this objective value like a virtue.  But the manner in which life was created, designed, or engineered is depicted as a material process–not a spiritual one.

Thus, Shaw can accept her theory of extraterrestrial parentage without the need of a metaphysical foundation for this belief.  She has data that supports it (including strong DNA evidence), even if it goes against the established body of scientific knowledge.  So her conviction and her cross are peculiar affectations, much like Captain Janek’s Christmas tree (a cultural symbol that survives through habit and custom).  What’s even more interesting is that Shaw does not discard her faith at the film’s end, even after she exclaims quite exuberantly: “We were so wrong.”  She requests her cross back from David, who had removed it earlier.  He asks: “Even after all this, you still believe, don’t you?”  It’s a fair question.  How can we take Shaw seriously as a scientist if she is so willing to turn a blind eye to all that she has just witnessed?  We are left silently snickering at this all-too-human foible, just as David mocks it in his own special way.

So Prometheus does not support a metaphysical outlook, even if its characters adopt one.  As Jim Emerson points out:  “Not unlike Star Trek V: The Final Frontier, Prometheus uses god as a MacGuffin.”  Furthermore, David the android serves as the perfect foil to the humans and their odd beliefs.  Toward the end of the film, on the brink of death, Weyland declares: “There is nothing.”  “I know,” David responds with appropriate coldness.  “Have a pleasant journey, Mr. Weyland.”

Further reading: