Tuesday, March 31, 2009
Dysconnect, March 2009: Decoding Recoding [reviewing reviewing]
A couple of months ago, music reviewing reached crisis point. I wouldn’t blame you for having missed it – it barely made a sound. Or nobody was listening. Or everyone had their headphones on. And there was so much background noise.
What happened? While the desks of music editors the world over were (are!) overflowing with promo CDs nobody can be bothered listening to, people at home are deleting music… only slightly slower than they’re downloading it. Meanwhile, websites continue to publish so-called reviews: sometimes carefully written, mostly barely skimmed, and, almost invariably now, with only the most tenuous relation between the rating and the reality.
A now infamous example of this is the 9.6 that Pitchfork doled out for Animal Collective’s new album. During a backyard ping-pong match the other week, I asked members of three of Melbourne’s indie bands – the kind of bands who, shall we say, might feasibly get an eight-point-something-or-other on said site – what they thought about the rating. Standing incredulous atop a damp underpile of envy was Mr Disbelief: how, indeed, is it possible for anyone to get a 9.6? Somehow, they agreed, 9.6 was much more than even the ‘five stars’ a top-rated film might receive. The new AC album was good, but… 9.6 had something impossible about it, something parodic. ‘Cept no-one was taking the piss. Pitchfork really meant it… they even thought it was defensible. No, more than that: they thought it was a 9.6.
The reasons for the seemingly inexorable climb toward the banality of near-perfect scores are doubtless more complex than the following, which nonetheless needs re-stating:
a) music writers glamouring/clamouring to be the first to call ‘Album of the Year’ (by mid-February, if possible);
b) music writers attempting to accumulate social capital, upping the impact factor/hit count for themselves and the site/blog they’re writing for, by churning out an attention-grabbing review;
c) the kind of aesthetic circle-jerking that transpires when the music in question is only reviewed by people whose cool is intimately bound to loving or hating the recording under review;
d) the fact that reviewing is usually the ‘job’ (if, indeed, anyone is getting paid to do it) given to entry-level wannabes who, after struggling and failing to write a measured review that retains some nuance and ambivalence, opt for odes and boost the fuck out of whatever it is;
e) the fact that writing a compelling review that nails the recording is actually really, really, really difficult. Especially if the conclusions reached in the course of writing mean that the rating ends up having to be three stars.
Accelerating, undermining or otherwise affecting all the above factors is the datasea: reviewers, drowning in mostly worthless promos, are expected to make watertight judgement calls about recordings they’ve only had time to listen to a few times (because Tuesday is already a week ago, and now there are nine other new albums vying for your attention). The effect on the receiving end of the datasea (with its daily tsunamis microwaving their way toward a desktop near you) is that most people don’t take the time to read reviews carefully, which negates any reward for investing careful consideration in nuance. So what’s a reviewer to do? Tip, boost, high-five, move on, repeat.
I’m one of those old-fashioned people who believe that a culture of carefully considered, trustworthy reviews is worth retaining, and I don’t think this has anything to do with the kind of vulgar ‘gatekeepers of high culture’ arguments I’ve heard advanced against the current situation. Not least because there is no High Culture anymore: tastes and audiences have fragmented and multiplied to the nth degree, while there is less than ever before (in terms of cost and distribution) keeping you from accessing and enjoying your preferred peculiarity.
What to do then? If you accept that there is something desperately wrong with the state of music reviews, and you believe that there is a place for key sites and writers to filter and evaluate new releases, then consider the following…
1) Descriptive Previews: these short pieces work on the basis of impression, are primarily illustrative, and presume that the audience hasn’t heard the recording. They are only intended to provide a clear idea about what the recording actually sounds like (and not just to a first-year lit. major), and make no attempt to rate or otherwise evaluate the music in question. Descriptive previewing is a matter of decoding the recording.
2) Critical Reviews: these longer pieces make evaluative judgments about subjective importance or merit. They should place the recording in the broader context in which it was released, and address the general critical reception of the recording by its audience. One of the key functions of the critical review would be corrective, by up- or downgrading the impressions made in earlier previews, correcting ambient hype levels (now that the tipster apparatus has peddled its fixed-gear bicycle to the next cool thing) and rescuing poorly received, unheard or generally ignored releases from obscurity. Critical reviewing is a matter of recoding the recording.
In a way, the above suggestion has got the same whiff of futility as the ‘legislating against capitalism’ rhetoric you see Western politicians engaging in at the moment in order to appear clear, strong, and decisive in the midst of successive spasms of economic crisis that no-one has control over… but like politicians, we tend to stridently overestimate our ability to control things while timidly underestimating our ability to influence things. Being grand, saying a sovereign no, making definitive statements and issuing ultimatums… all completely useless. But they’re all actually much easier than slowing down, tuning in, and trying to give a fuck about what is being said about what has been heard. If only we took the time to write carefully, to read carefully and to foster this culture of warm engagement (as opposed to the dominant culture of cool disengagement), we’d all be making an infinitesimal difference, and that adds up. But not to a 9.6. Maybe a 4? Or a B-? Let me spend some time with it, and I’ll get back to you with something more precise.