Books | Art | Critical Theory | Music | New York






meme n.
"A unit of cultural information, such as a cultural practice or idea, that is transmitted verbally or by repeated action from one mind to another."

I came across this word and began thinking about its chances for a reemergence in our blog-filled world. The field of memetics studies cultural evolution as it relates to ideas, and its researchers will eventually have a great deal to talk about once we get some distance from the present explosion of cultural exchange.

Richard Dawkins coined the word in his 1976 book The Selfish Gene, though it had appeared as “mneme” in a 1904 book by the German evolutionary biologist Richard Semon. Dawkins took the lead and married the word to the biological “gene,” as in one unit among many which constitutes a larger body—in genes, the biological code; in memes, the collected experience of culture.

In the mid-90s, right in the midst of our Internet explosion, two non-academics (the Microsoft executive Richard Brodie and the mathematician/philosopher Aaron Lynch) began exploring the implications of the idea in light of our age of growing information. They arrived at an important question: is a meme a unit of information inside of a brain, or is it an external cultural artifact? Internal, or external?

Today, blogs provide an enlarged pipeline for cultural information to travel—the ability for word-of-mouth ideas to appear consistently enough to become cultural ideas. The evolution of a meme is like that of a biological gene: as it is passed from one place to another, ideas are added and left behind, and each transfer changes the spin, like a complex game of “telephone.”

Has the chance for any average citizen to host a blog increased the speed of our cultural evolution by increasing the speed of our cultural exchange? It used to be possible to argue that the Internet was full of institutionalized information, but now anybody can publish. And the cultural exchange, which used to be merely theorized because it consisted mostly of unrecorded day-to-day encounters, is now happening on the Internet, in a relatively concrete form.

In the blogosphere, a meme straddles the line between internal unit in the brain and external cultural artifact. Richard Dawkins called memes “units in the brain” because that’s where he hypothesized the evolution taking place: in a person’s mind, from which an idea was then reintroduced into the cultural landscape. But with blogs, there is so little boundary between personal thinking and worldwide exposure via the Internet that the internal is easily externalized. The distinction between these two ideas depended on the lengthy and difficult process of personal thinking making its way into the public, collective consciousness. In our day, this process is not so difficult at all, and the blog community has the potential to thrust personal ideas out into external, culture-wide ideas. Into bona fide memes.

I'd like to thank Wikipedia for providing all information pertinent to this blog. It feels like everything is at one's fingertips.



Time magazine recently chose the top 100 novels written in English since 1923, the year the magazine started. There has been a fair bit of derision (and conspiracy theorizing) from bloggers about Time’s “authoritative” choices, which are of course quite subjective.

Richard Lacayo, one of the two critics who formed the list, noted that they had to exclude a few writers whom they really liked, but who had written better short stories than novels (Flannery O’Connor, Donald Barthelme). This sort of problem is addressed well in a Guardian article about stuffing literature into categories and then drawing up qualifications according to those categories, and why we shouldn’t do it.

This clever idea is the funniest response that I’ve yet read. Who needs professional critics? Go, democratic opinion!



I woke up this morning rather groggy, threw off the comforter to wake myself up, then promptly scrambled to re-cover myself as I remembered that our landlord has yet to turn on the heat. "It's okay," I thought to my half-awake self. "I'll scramble out of bed, grab my towel, and hop into the hot shower." So I ran into the bathroom, threw on the water, and shivered for a while. I noticed that the bathroom window was open, so I kicked it closed. I felt the water. "Max, did you already take a shower?"
"Have a look at my face. Does it look like I've showered recently?"
"Why isn't the hot water working?"
"I don't know."
"I'll make coffee." I did, then I laid out my clothes, packed my lunch, did everything possible to expedite the post-shower routine. Ten minutes later, I dunked my head under the sink, shaved with cold water, and cursed the fact that our realty company is "devoted to providing low-cost housing to prevent homelessness." Oh, and it's raining, and I had to watch my bus pull away while I was trapped on the other side of the street with a no-walk signal.

The thing is, I'm actually in a good mood. We saw Wolf Parade last night, and somehow ended up in the second row. It was ecstatic and very affecting, and I found out they have two lead singers.

The weekend was fast, interesting: we saw Lou Reed play to a tiny crowd for the opening of Canal Park in Tribeca (thanks to a message from a visiting friend--that's happened more than once, that a visiting friend is the one who tells us where we're supposed to be--see the free Clap Your Hands Say Yeah show), I got a cheap, mod-looking haircut at a salon school, went to Brooklyn and bought a winter coat at Beacon's Closet, saw Godard's Band of Outsiders, which has the finest and cleverest ending I've seen in a long time, and went to an office party at a bar with free drinks, where I met somebody who grew up three blocks away from my house.

I feel like we never had an autumn: where were the nice trees? It's gone from hot to cold, and humid all the way through: 94% humidity today, 53 degrees. And it never stops raining. I press on, use my free umbrella, and remind myself that it's really fantastic to be alive.



Overheard at: 57th and 6th, M31 Bus

Christian #1: "Did you hear about that new Hurricane? God's showing us something, I know it."
Christian #2: "And they're still gonna have Mardi Gras this year in New Orleans."
Christian #1: "Man! I should have known. Those sorts of things always go on, come Hell or high water."
Christian #2: "Yeah. I think it's gonna be both in this case."



We got the second half of Scorsese's No Direction Home and watched it immediately. It was frustrating having a two-day gap between the halves (Netflix, while fantastic, does have the problem of turnaround between movies: even with the mail taking one day and Netflix sending something out immediately upon receipt of the movie, you're looking at getting something on the third day after you send the first one). I was left with Dylan's enigmatic persona hanging around in my mind in the interim, and while I thought the second part would allay my Dylan-infused psychological state, I was wrong. In fact, the gift Scorsese has given us is an honest, rare look at both the private and public personas of the singer, and how alike they were, and how infectious they are.

Dylan was always acting, and the footage of how he circumvents reporters' asinine questions is priceless. When he's backstage with friends, though, you see that it's mostly the same; balancing the limelight with a private self, he was capable of existing in a highly public sphere while, somehow, remaining in complete, disarming control of himself. It's hard to describe, but the point is that he puts on this persona in which nobody can tell him how to act. It's all a sort of coy, shy act. He revels in ambiguity and contradicting himself, and nurtures awkward silences like he's tending a garden. In this persona he is able to remain untouched by the stupidity of crowds who booed his electric music, and by reporters who ask him how many other singers also "toil" in the musical vineyard of his art. "How many?" he asks. "Yes, how many would you say." "I'd say there are about 136." "136. Now, is that an estimate, or is that exact?" "Well. I think it's either 136, or 142." In his evasiveness he continually exposes those interviewers who don't know his music, and who aren't asking questions but suggesting pre-conceived answers.

Just recently, a bunch of people got angry that Dylan chose to distribute Live at the Gaslight 1962 only at Starbucks, that corporate American giant that overruns poor coffee farmers around the world (and that really needs to quit overroasting its beans. My God, give us a medium-bodied coffee!). Speculation abounds, much of it suggesting that Dylan just wants to clash with people's image of him. The man will do anything to avoid labels.

Allen Ginsberg had a great bit describing Dylan's persona during that early-1960s period: he called him a shaman, a man who had channeled his very breath into an output of his entire persona, his entire artistic goal, his whole consciousness--singular. And in the singular, if it is a whole personality, there is contradiction.

The Guardian has especially great articles about the show, as well as some republished stuff from 1966.








Are they really this clever? No, but this is way too much fun.



I had a cliche moment today when I pulled up the curtains and was blinded by an open, smiling blue sky. Nevertheless, I am amazed at the power of weather to make one forget everything and remember that the world holds surprises. It had not stopped raining for eight days, until this morning. I skipped down the stairs with bedhead and flip-flops, into the French market to buy tomatoes. Families were smiling and yelling and buying pastries. There was a general poetic climax of happiness in the air. Not happiness, maybe. I think a better word is joy.

I’m interested in the difference between the two. On Thursday we saw Antony and the Johnsons at Carnegie Hall. His songs are not happy; they’re achingly sad and beautiful, emotional and honest. In between songs at the show he was delightful and forthcoming, telling jokes and inviting the audience to sing along to his cover of Shania Twain while telling of a vision in the clouds in which he met her.

He mentioned more than once how the number of sad songs we could handle was slowly diminishing, so he’d better hurry it up so as not to indulge us too much. Clearly it’s the only kind of song he knows how to sing honestly. But then he said something interesting: that he does all he can to nurture a sense of joy. It was right before singing “For Today I am a Boy,” a song which expresses such a sense of impossible desire, of overcoming physical biology and time, that I feel like crying every time I hear it. Lots of people cry during his songs, but what I realized during the concert, which came alive through his personality, was that it’s not the sadness that makes one cry; amidst the melancholy and lost hopes there is a cultivated, gently nurtured sense of joy. In this nurturing is the artistic struggle; in this the songs are saved from self-pity.

Joy and happiness are not the same thing, and joy, perhaps, is best seen amidst sadness. Joy requires for its definition unsatisfied desire, which itself becomes a state which is desired more than any other satisfaction. It is a belief in Timbuktu, or Arcadia, or whatever one wants to name that place beyond us that would be a place of home and peace and understanding. One doesn’t even have to believe it actually exists, maybe, and joy is not the same thing as faith, but one must certainly hope for something. Antony’s album is called I am a Bird Now and he repeatedly sings about becoming a bird and taking flight; this is impossible, and it’s why he can remain joyful. There are equal parts grief and hope: grief for the impossibility of this place, but hope for it despite. The ways in which he incorporates this hope for impossibility into gender, how his voice and persona perform in an androgynous, otherworldly space, is enchanting.

For the encore Lou Reed came out and played the Velvet Underground song “Candy Says” while Antony sang. It was a treat and a perfect end.



Somebody left an anonymous comment about the post regarding Google and copyright info, so I thought that I would respond. They brought up a few issues:

1) The end of the public library: since we can just “google it” to get any information we’d like, what’s the point?
2) Assuming the worst outcome for the first question, then there will be people who don’t have access to the Internet, and therefore cannot access important information.
3) How will we tell what is good information, and what is bad? Where is the quality control?
4) An artist might gain exposure from the wide audience that comes from the Internet, but what happens when that audience doesn’t want to pay for the art, making the assumption that it’s their “right” to access it?

So. I’d like to combine questions 1 and 3 into a single theme: with all this information, who says what’s good and bad? For example, I'm suspicious of self-publishing houses which allow anybody to publish and distribute a book (with very light editing help from the company), because there’s no quality control. On the one hand, it’s excellent for somebody to be able to ignore the "institution" of publishing, which likely rejects a large number of talented writers because they aren’t commercially viable, or in alignment with what the publishing house believes, or whatever the case may be—in true utopian spirit, the idea bypasses bureaucracy (kind of) to allow free expression. But there’s a reason that I like a lot of the same books from certain publishing houses, or even certain editors: they’re hand-picked. I know that if a book has gone so far as to be published, I can trust that it’s worth reading on some level. Editors provide an important service.

The public library comes out of this same quality control, since they simply stock the books that have already been approved for publishing. They also provide access to databases that are indexes of “approved” information, like encyclopedias, or an index of journal articles. And librarians curate their collections to make them as helpful as possible. In all these cases, the information has been hand-picked to some extent, or given a stamp of approval.

The same applies to sorting through the heaps of information on the Internet. I was talking about this with my father last week, and something he said was interesting and deceptively simple: while there is no longer a lack of access to information, there remains a premium on good thinking. To which I added: discernment. It may seem simple, but the ability to rapidly categorize information into helpful and unhelpful mental “piles” will be essential in a digital world of information. When something seems off about a website, one has to smell the rat and move on. A gray area is emerging between institutionalized, “approved” content and the crap that anybody can make public: there is a lot to be had, understood, and gained from this gray area, and these qualities of good thinking and discernment have to be cultivated to make it useful: the ability to filter and predict and prioritize. All of these have always been qualities important to good research. And I do really believe that it’s useful, because there is a real possibility for free expression in this no-man's-land. Blogs, obviously, are a huge resource in this area as they begin to collect links and prioritize and make recommendations. The whole thing becomes a self-checking, better-by-democratic-opinion sort of web.

In response to the second question, about people who can’t access the Internet or can’t afford a computer: this is certainly valid. However, consider public libraries, which provide Internet access for free, and which I don’t think will ever be closed anyway. The importance of print resources, beyond accessibility, is clear in the way that they provide checks and balances. One thing that’s scary about the Internet is the way that all its information is abstract, can be untraceably altered, is not concrete and solid. Real books provide a check on that abstract information. With all information stored in one centralized place, as a professor of mine once noted when our university decided to destroy all art-historical periodicals because they were available in an online database, one flirts with definitions of fascism. This is an extreme position, but it’s a point worth making. The opportunity for difference, I think, lies in the fact that all that centralized information is not controlled by one institution, but rather is stored in widespread locations with lots of people checking and balancing. Consider Wikipedia, the online encyclopedia which anybody can edit: on the whole, it is full of true, useful, well-organized information. It is democracy at its best. (It’s also very interesting to take a look at the deletion log, which tracks the rapid and innumerable ways in which the encyclopedia is being altered.)

Finally, the original source for this debate, and the party with valid reason to object to all of this: artists. Sure, it’s good for an artist to have exposure, but with free information, how are they to collect payment for the sort of information that is creative? How are we to distinguish information which is merely factual from information which is artistic, of a wholly different value? I’m using a loose definition of information, but in this arena it works: if something can be digitized, it is reducible to its simplest combination of ones and zeros, of binary code. Purely information.

To answer this we have to be specific about what sort of art we are talking about. For musicians and filmmakers, the process of digitizing their work is quite simple, and has already been done to a limitless extent with music in mp3 format. So that kind of art is easily distributed. But I think it’s quite fair to say that, at least for the independent musician, the vast circulation provided by the Internet has been a benefit, as I mentioned in the first post. Film, I suppose, we would have to see. Books? To me, having paper in front of me is never comparable to reading on a screen, and I disagree that the next step is creating an iPod for print. Words on paper have been around far longer than film or recorded music, both of which are more recent inventions. I find it hard to envision a world without books. And as long as they are physical things, actual artifacts, the people who wrote them can make a living from their sale.

When we think about the fine arts, it gets a little trickier. Surely painting and sculpture are necessarily perceived “in person” as actual objects, at least in their classical sense. But people have been making conceptual art for over three decades, shedding the usual belief that the hand of the artist is a gifted commodity and instead selling the idea of what they’re doing, almost like a patent. Warhol made mass-produced objects, and called his studio a factory. Today people have begun making explicitly “digital art” which is meant to be experienced at a computer screen, embracing the new technologies. Even if one thinks that’s not true art, we will always have painters and sculptors who make work that doesn’t make sense unless it’s seen in person. One cannot digitize a work of art's "presence" and distribute it around the world.

So the claim that an artist will starve if nobody pays for their art, while compelling, needs to be examined a bit more closely. We have to come right down to the specifics of an artwork and how it transfers to the digital realm. Music is the most easily transferable, and really, I think the benefits independent artists have gained from the Internet’s circulation outweigh the money lost by some recording artists to file sharing. It’s not as if people aren’t willing to pay for the music, anyway: iTunes has been a massive success. And those recording artists who object so adamantly aren’t starving. Nowhere do we hear complaints about file sharing from small bands, at least not that I’ve heard, or from the independent musicians who are, in my opinion, making the music that matters. Embracing the abilities of the technology is the best solution: in the end, those who object are fighting what is, of course, a losing battle.



The air never turned dry this weekend, carrying a misty overhang that made it feel as if the clouds were a low ceiling. Didn't really leave the apartment except to venture out and attempt to watch a performance of avant-garde music beneath abstract film, but it sold out on us. The trek to Chelsea is frustrating enough. But then Nick noticed that Lou Reed was waiting with us in the lobby. The mist turned into a mysterious aura, and we walked home in half-silence through Flatiron, up 5th Ave., which is when we wished we had flip-flops and had been drinking.

Saturday I had another go at "The Five Obstructions," a film which deconstructs the concept of cinema by documenting Lars von Trier's various "obstructions" to remakes of a 1967 short film by the Danish director Jorgen Leth, entitled "The Perfect Human." Along the way it unravels normal modes of thinking, psychology, the concept of humanity, and the personality of Jorgen Leth. I would really recommend renting it, preferably with a friend you can talk with afterwards. Upon second viewing I found that, as a whole, the film lost a good bit of meaning once the shock value of the first viewing was gone. Nevertheless, the quality of the individual short films was more apparent once I could let the conceit of the whole thing be secondary. It's still a really mind-blowing film altogether. What emerged for me was the complete superiority of Jorgen Leth over Lars von Trier, insofar as they are defined by their characteristics in the film: Lars is strangely self-satisfied in his psychological gamesmanship and vaguely annoying, while Jorgen has this aura of artistic integrity, of authentic humanity and its triumph, and responds to the overly cerebral challenge with plain old good art.

Saturday brought a trek through the pouring rain in shirt and tie to the West Village, where we hid in a dive jazz bar and saw an overweight man who called himself the Reverend play piano. He was a bit like a religious Tom Waits, minus the outsider-wanderer aesthetic, plus a heap of sentimentality and earnestness. He had us all holding hands by the third set, metaphorically picking our "burdens" out of the muscles in the backs of our necks, throwing them on the ground, and stomping on them. It was great fun.

Sunday I paid bills and got depressed, but then we made this Asian soup with udon noodles and pot stickers. Yum. After a visit from Dad last week which included a spree at Food Emporium, we've found ourselves with strange amounts of food and ingredients we normally can't afford. Here is a picture of the filet mignon dinner we made Saturday night, topped with a merlot reduction sauce with shallots. (Ed. Note, 10/11: See here for a full review)


It's getting ridiculous how much effort we put into cooking (and, apparently, taking pictures of it. See facebook). But it's something to throw oneself into. I want to get a book on the science behind technique, start to academicize it.

It's Monday night and everything is finally beginning to become dry. I feel like I passed over today without noticing. The days are getting physically shorter, but I also wonder how long I can deal with coming home and realizing it's midnight before I can begin to think. I sit in a chair all day, mostly idle--I read as much as possible to keep my mind alive, but there's something about the pallor of an office that itself feels like a low, heavy ceiling. Mostly it's because I don't have a real job with real responsibilities, so I'm only as occupied as the people who supervise me need me to be. But I can't write while I'm there, not really, and I can only read so many publications so thoroughly--The Times, The New Yorker, the Voice, NYMetro, Pitchfork, everybody's blog twice (hey all of you--post!). There is always 20 questions, which is still astounding (am I the last person to hear about this thing? My first one was muffler, and it nailed me in 17).

I imagine that this is what it feels like to evaporate. It's a feeling of weightlessness, of being unsure of one's present state and losing hold of it, to feel inconsequential yet lofty, changing forms, living a life in the mind. It's a transition. I sound dramatic. It's true, though.



I recently read Michael Chabon’s book Wonder Boys, on which one of my favorite, most cherished films was based. As a movie it’s a slyly uplifting story about a failing writing professor who comes to realize one of his students is a strikingly talented writer, however much he may succumb to petty thievery, excessive brooding, and compulsive storytelling (what the less romantic of us might call lying). Amidst a failing life, which culminates in a series of genuinely hilarious strokes of bad luck and whimsical coincidences, Grady Tripp is able to lose everything he thought worthwhile while gaining what he never knew to be important. It’s a collegiate film set in a town of professors with nostalgic cars, who use typewriters, relive old movies, and generally live in a universe in which aesthetic consequence is far weightier than actual consequence. In fact, the reckless main character never quite pays for his actions, not really, and it’s mostly solved because he concludes at the end that he “knows where he wants to go” as a writer.

The book is quite different in tone, if only because of its lack of a happy, pat ending. It’s more satirical, sharper, less warm. James Leer, the student, is not a genius writer; he actually writes rather complicated, screwed-up, lacerated prose that’s nearly impossible to read because of its excessive and ill-placed punctuation. The various farcical instances of the novel have a different sort of humor, not laced with lightness or as much absurdity as in the movie. It cuts deeper, and it has consequence. There’s something of a macabre edge to the novel’s world, which is complemented by the narrator’s continual habit of likening his life circumstances to the short stories of August Van Zorn, an obscure writer descended from Edgar Allan Poe.

One of the best parts of the book is the way Grady Tripp relates himself to his literary interests, including his own characters. At WordFest, the weekend event which is the backdrop to the whole book and film, a professor gives a lecture entitled “The Writer as Doppelgänger”, referring to the idea of a shadow self which follows one around causing mischief (see, e.g., Peter Pan). Having what Grady calls the “midnight disease,” many writers find that their literary aspirations and characters become doppelgängers, as they gradually lose hold of the line between the physical and fictional worlds, always suffering the quintessential fates of their characters. After a while, a writer confuses reality with dreams, or himself with his characters, or the random happenings of his life with the machinations of a plot. The line between fiction and reality becomes overrun.

This aspect of the novel changes it completely, I think, from the movie. I’m not sure which one I like better--the movie is darkly funny and familiar and disarmingly touching. The book has a darker edge to it, a shadow which is the doppelgänger and the midnight disease, and a realization that real consequences happen and that one must face them, and that they set the artistic consequences into better, clearer relief.


About me

  • Blake
  • Chicago, IL, United States



Powered by Blogger