Thursday, 8 August 2013

Numerics, Symbols, Stars & Sliders: What Is The Perfect Rating System?


A large portion of music followers (99.9% large; not quite full force, but close) will, I imagine, agree when I say a rating is one of the most lasting - if not the lasting - factors in any written or spoken review. It's what readers come for at the end of the day: a denoted value or evaluative conclusion on just how good or bad a given piece of creative work is. Whether or not you care for the details, or the [un]necessary name-drops of artists and tracks that share maybe one common trait with the subject at hand, it's the score readers are interested in. It's the responsibility of writers and critics (professional or not) to use that as a means of offering objective scrutiny from a subjective point of view. Further to that, the final score gives readers some idea of how such an offering - in this case, an album - stacks up against the countless thousands of releases comprising a single year. But while ratings and critics' symbolized conclusions are the determining factor in an album's representation, could they themselves be rightly criticized under that same scrutiny of quality and appeal? For years I've followed more than a dozen different critic collectives and online outlets that offer themselves (or at least try to) as fair, honest and eager-eyed writers. But not all of them, rightly so, follow the same textual or symbolic representation when announcing (or denouncing, as the case may be) a particular album's offering. So my question is this: is there such a thing as the best rating system?

I ask this because my feeling is that not every way of representing an album's quality by a set value or number of pictorials does justice to actually getting at the heart of a given record's finer traits, flaws and skills. And while the end value may translate the same across all rating systems, some of these, I fear, risk falling short of the very thing they're meant to stand for: a sense of thoroughness and detail. Star ratings, or any other kind of visual mark, are a worry simply because they're too narrow. Take Q Magazine - everybody's top-notch source for completely non-biased, non-patriotic breadth in music know-how - as a fine example. Five stars is the top rating, one the lowest. Let's present these ratings, as I'm sure the magazine will agree, through a textual equivalent. One star: don't bother with this. Two stars: poor to most, challenging to the hard-core. Three stars: so-so, fair, average. Four stars: great, definitely worth checking out. Five stars: brilliant, flawless, no faults, 100% top quality. Can you see the problem with this? The jump from average to great is by a count of one measly golden star. What if an album is just good? What if it's neither average nor great, but an enjoyable record that isn't quite enthralling? Does that not exist here? And while I'm not going to turn this into an exercise in critic-bashing, a lot of commercial releases (British-based, not surprisingly) get the 4/5-star rating. So...what...they're all great albums; not good, but great...at the very least enjoyable? Oh, and what about those occasional 5-star ratings, huh? Am I to assume that if it's higher than great, then by default it's classed as flawless? Am I to take from this that you've found absolutely zero indecision or misjudgment (no matter how minor) in the record front-to-back; that every second is something listeners will experience with wide eyes, locked-on ears and a beaming ecstasy of joy plastered across their face?
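The narrowness being described can be sketched in a few lines. This is a hypothetical mapping of a Q Magazine-style five-star scale onto the textual equivalents suggested above (the labels are my paraphrase, not the magazine's own wording):

```python
# Hypothetical mapping of a five-star scale to textual equivalents.
# The labels paraphrase the descriptions in the paragraph above.
STAR_LABELS = {
    1: "don't bother with this",
    2: "poor to most, challenging to the hard-core",
    3: "so-so, fair, average",
    4: "great, definitely worth checking out",
    5: "brilliant, flawless, top quality",
}

def describe(stars: int) -> str:
    return STAR_LABELS[stars]

# The gap the paragraph complains about: nothing exists between
# "average" (3) and "great" (4) for an album that is merely good.
print(describe(3))  # so-so, fair, average
print(describe(4))  # great, definitely worth checking out
```

Note that `describe(3.5)` simply cannot exist in this table: the scale has no key for "good", which is exactly the missing middle ground.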

Let's turn to number scoring. The simple 0-10 measure is the most common, most recognizable uniform way of marking how well an album has fared. But let's look more closely at the repercussions of limiting an end score to whole numbers. Marking an album down as an 8 is fair, but what about all that space comprising the leap up to the next marker, the supposedly excellent, superbly brilliant tag a 9 represents? Are we simply to assume albums should be shoe-horned into ten (or eleven, if you really want to include 0 as a rating) different points with no variance in between? And what about the moment you end up with too many albums of one given rank? 7 is usually the middle-ground for most album scores - 7 (or 3.5/5 if you halve it) being that respectable, 'good' region that calls out obvious flaws and lackluster moments, but at the same time praises the positive majority. Fast forward to December, aka the month that generates hundreds upon hundreds of end-of-year lists, and you've left yourself a horde of records you've rated as 7s, if you haven't already created that problem in the 8 category. What then? How are you meant to determine which '7' album is the better 7? Are some 7s weaker than other 7s because this 7 didn't offer what that 7 presented so much more provokingly? Oh, but wait, you forgot the 7 that had that one track everybody, you included, loved. How does that factor in?
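The end-of-year tie problem is easy to demonstrate. A minimal sketch, with invented album names and scores: whole numbers leave every 7 indistinguishable, while one decimal place produces an ordering.

```python
# Sketch of the "too many 7s" problem. Albums and scores invented.
whole = {"Album A": 7, "Album B": 7, "Album C": 7}
decimal = {"Album A": 7.2, "Album B": 7.8, "Album C": 7.5}

# Whole numbers: every album ties, so sorting tells you nothing.
assert len(set(whole.values())) == 1

# Decimals: a best-to-worst ordering falls straight out.
ranked = sorted(decimal, key=decimal.get, reverse=True)
print(ranked)  # ['Album B', 'Album C', 'Album A']
```

The whole-number critic has to hold that ordering in their head (or invent it retroactively in December); the decimal critic wrote it down at review time.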

While not necessarily derogatory to the albums themselves, the fact that some critics end up with albums placed higher or lower than others presented on the same rung of the rating ladder not only highlights the flaws in a whole-number system; it also raises a growing loom of questions over how strong or weak an album might have been on that particular rung, so as to end up, potentially in some cases, not in the Top 50 of a given year. This is one of the reasons why I support decimal values in ratings, or failing that, a system that allows albums to be scored in double figures as opposed to a narrow, bordered single value. The likes of Metacritic and Pitchfork do this alternative justice (even if the users and writers don't necessarily have the best reputations when it comes to using them, as opposed to abusing them), and in doing so the verbal and textual communication of critique comes across as far more granular and accurate. Again, an album might end up with a 7, but here the system feels more like an expanse or a slider than a tier-like enclosure or ladder - numeric blocks becoming numeric regions instead. Even if an album still falls under the 'good' brand, a 7.8, for example, may suggest that while it may not be strong or versatile enough to be deemed 'great', those extra eight tenths mean (from the writer's perspective at least) the album can be looked at multilaterally rather than unilaterally. Yes it's flawed, yes it's still good...but, and this is the crucial bit, it could give those low-8s a run for their money when experienced by a certain individual at a certain point in time.
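The "blocks becoming regions" idea can be made concrete. In this sketch the qualitative bands are my own assumption, not any site's official scale: a 7.8 still lands in the "good" region, but the decimal records how close to the next region it sits.

```python
# Sketch: decimal scores as regions on a slider, not discrete tiers.
# The band boundaries below are assumptions for illustration only.
def region(score: float) -> str:
    if score >= 9: return "excellent"
    if score >= 8: return "great"
    if score >= 7: return "good"
    if score >= 5: return "average"
    return "poor"

print(region(7.8))  # good -- but only 0.2 away from the "great" band
print(region(7.2))  # good -- the same label, a very different album
```

Both albums get the same word, but the numbers preserve the distinction the word throws away.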

This opening-out of the field, to let an album lie around a value rather than be conjoined to it, has made its way into pictorial representations too. Stars now show up chopped in half, and while that still works out to whole numbers out of 10, it at least shows that the writer - and indeed the magazine or site hosting them - is aware of creativity's wide range of differing quality and attempts to express it. Some critics, perhaps not music-related, have taken the cut-up concept to new lengths via the likes of pie charts: a hollow circle whose slice grows with the score until it reaches full volume. While this still holds onto that concept of offering breadth, even if by the narrowest of degrees, the visual side tends to address a strict focus on the group's visual identity and theme, rather than necessarily reflecting the opinionated 'facts' of the album's content. Take colour-coded representations of a score. Would you be put off looking into an album should it lie not close enough to the 'brilliant' end of a gradient sweeping from red all the way up to green (or blue in some cases)? Visuals in rating systems aren't an absolute problem. Sometimes they help in removing the margins and borders that isolate given terms and phrases meant to reflect good or bad traits. So long as review sites and magazines focus more on the written details, and leave the score and imagery to be a summing-up of what's come before, there shouldn't be too much of a problem. Otherwise, what's the point of writing any words if a picture...as they say...tells a thousand of them, instantly?
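That red-to-green gradient is itself just arithmetic over the score. A hedged sketch, assuming a simple linear blend on a 0-10 scale (real sites will use their own palettes):

```python
# Sketch: a 0-10 score mapped linearly onto a red-to-green gradient,
# as colour-coded rating systems do. The exact palette is assumed.
def score_to_rgb(score: float) -> tuple:
    t = max(0.0, min(score / 10.0, 1.0))  # clamp to [0, 1]
    return (round(255 * (1 - t)), round(255 * t), 0)

print(score_to_rgb(0))    # (255, 0, 0)   pure red: avoid
print(score_to_rgb(10))   # (0, 255, 0)   pure green: brilliant
print(score_to_rgb(7.8))  # (56, 199, 0)  a greenish "good"
```

The gradient is continuous, which is exactly why it can mislead: a reader's eye judges "close enough to green" far more coarsely than the underlying number warrants.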

But going back to this idea of rejecting narrowness in scores, perhaps one of the more surprising alternatives in rating systems is what most will recognise - and likely recall in a mix of nostalgic cringe and grimace - as mimicking school grading. We Brits know full well the alphabetical range from A down to F, but for those in America, pluses and minuses give a clearer idea of how close one was to the next letter up...or, equally, the next letter down. A-, B+, C+. The charm and peculiar benefit of this - in an age where the number of blogs and young, wannabe journalists continues to rise - is that the system itself holds a kind of contextual reminder of what these particular values mean. A = success, B = great, C = good effort but could do better, etc. And with the extra bracketing of opinion in the form of an accompanying + or - symbol, while it doesn't offer as wide a field as decimal values do, it at least suggests an album or piece of work - if improved upon slightly - could find itself in the next region of grading, as all records should aim to progress. Even if the positive or negative symbol is absent from the end score, its use elsewhere lends the humble lettering that same sense of a wider bracket of opinion.
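The plus/minus bracketing amounts to sub-dividing each letter band. A minimal sketch, assuming American-style letters over a 0-100 score with invented cut-offs (real grading scales vary school to school):

```python
# Sketch: letter grades with plus/minus sub-brackets.
# The band floors and the thirds used for +/- are assumptions.
def letter_grade(score: int) -> str:
    bands = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for floor, letter in bands:
        if score >= floor:
            offset = score - floor
            if offset >= 7:   # top third of the band
                return letter + "+"
            if offset < 3:    # bottom third of the band
                return letter + "-"
            return letter
    return "F"

print(letter_grade(88))  # B+ -- knocking on the door of the As
print(letter_grade(81))  # B- -- only just clear of the Cs
```

Two albums could both be "B" records, yet B+ versus B- carries exactly the contextual nudge the paragraph describes: one is nearly in the next region up, the other barely out of the region below.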

It's clear that there are numerous ways to represent a given value on a record, and there's no such thing as the one underlying perfect system. The argument, to conclude, is that such differences of opinion bring a contrast in how that opinion is represented visually, and some systems, I feel, don't go far enough in properly representing the specifics and key factors that make an album objectively good, bad or in-between. So long as reviews - and, further to that, the writers - maintain a decent focus on properly valuing the quality (or lack thereof) in a given album, readers should be able to get a clearer idea of how far up the tier a particular sound lies when it comes to, as mentioned, comparing and contrasting a mass of creative ideas near the end of a year. But this isn't all about deciding which is the best of the best; simply put, this crucial element in music journalism is a reminder that areas such as this (especially when written and presented from a coherent and realist perspective) require, at the very least, a decent level of depth and independence, so as to give not just each album a scored identity, but also the reviews themselves proof that their content is considered and carefully thought through. There are of course some who take to highlighting the best and the worst of an album (good tracks & bad tracks), and this unorthodox expansion of rating an album is good to see. But verbally, from the perspective of both a listener and a writer, I find rhetoric nowhere near as effective as reason; the priority is always in fleshing out the positive from the negative...and vice versa. And that's difficult to execute when you're limiting yourself to such narrow-margined scoring systems.
~Jordan
