Call me cliché, spineless, or unoriginal, but I actually like reading and watching music reviews online. It’s often a great way to get another experienced perspective on an album. Additionally, I can see how an element of the album that didn’t work for me may have worked for someone else. The substantive, subjective parts of a music review are interesting to read – but what the hell is up with reviewers assigning numbered scores to music?
I mean, I get why reviewers do it. It’s “clickbait-y.” It provides a super easy and digestible product for readers. It can signal quality to those who rely on reviewers to drive their music taste. Readers don’t have to actually read a review to understand how a reviewer feels.
But here’s the issue: how can anyone feel that their experience with a piece of music is objective enough to slap a number on it? Writing a think-piece on a song or album makes sense; the reviewer has the chance to explain how they connected (or didn’t connect) with the music, or what the artist did (or didn’t do) well. But when reviewers attach a number to that think-piece – and then display it at the top of the review – the subjectivity gets erased. It insinuates that their opinion is objective, fixing a verdict of quality in the reader’s mind. Rather than allowing the reader to listen to the project, form their own opinions, and then read the review for more insight, a numerical score creates unnecessary expectations in an attempt to mold the reader’s opinion to fit the writer’s.
And even beyond the unnecessary attempt to inject objectivity into a subjective field, there’s the issue of comparability. Some albums are directly comparable: they feature similar styles, influences, and sounds. I can understand an attempt to rank albums in that scenario; ranking still seems pointless to me, but at least it’s possible. But some albums cannot be compared to others in any way, shape, or form: the artistic directions they take are too divergent for them to even be thought about together. This is the territory nearly all music reviewers operate in. In a perfect world, we would see reviews like, “this album is a 5.4 for post-808s & Heartbreak hip-hop,” or, “this record is an 8.3 within Gorillaz’s discography.” Instead, there is a direct attempt to compare, rank, and score wildly different musical projects against each other – a task that simply doesn’t make sense.
courtesy of Pitchfork
Let’s pick on Pitchfork for a second. On its “50 Best Albums of 2018” list, the number one spot goes to Mitski’s Be the Cowboy (rated 8.8). Great – I’m obsessed with this project and have been bumping it non-stop for the past two weeks. Scrolling down, you can find Die Lit, Playboi Carti’s debut studio album, at number 25 (rated 8.5). Then, Pitchfork ranked The Internet’s latest album, Hive Mind, at number 39 (rated 8.3). I do not understand, in any way, how these three albums can possibly be ranked or rated against each other. Be the Cowboy displays a crystal-clear indie sound, meandering verses, and an abandonment of traditional song structure; the album centers on Mitski’s intense feelings of isolation, loneliness, and depression. Die Lit is a fairly straightforward album with minimalist production, solid hooks, and slick trap beats. Hive Mind is a genre-bending display of classic R&B, head-bopping funk-inspired bass lines, and dreamy, synth-heavy neo-soul influences. The projects represent three totally different genres, three totally different artists, three totally different objectives and directions, and are made for three totally different audiences.
Music reviewers offer a great service: solid analysis, background on an artist’s life, and another perspective to help understand a work of music. However, when music reviewers assign a numerical score to a project, they attempt to create objectivity in a deeply subjective field – and to compare projects that defy comparison.