SpinTunes 5 Judging Wrap-Up

Except for a few loose ends (like announcing the winner) that are in others’ hands, SpinTunes 5 is pretty much done, at least for me as a judge. It’s been interesting, challenging and worthwhile. Thanks to Spin for inviting me.

My experience judging hasn’t changed my mind about any of my previous recommendations for SpinTunes. If anything, I stand by them all more strongly than ever. Now that I’ve been a judge, I’ve got some added perspectives. Here are some reasons to be a judge, some advice if you do, some new recommendations, and finally some regrets.

To Judge Or Not To Judge

Not that I think it should be mandatory, but I think any SpinTunes regular ought to judge at some point, like putting in your time on jury duty to serve your community.

There are downsides. You can’t enter songs; at most you can wait until the contest is over and do shadows. It takes time — a whole lot of it if you want to do a really good job. You open yourself up to criticism and can’t expect to please everyone every time, just the same as when you put your artistic work out into the world. Some of the behind-the-scenes discussions among judges and admins can be challenging and even unpleasantly counterproductive.

Still, the community benefits from judges with songwriting experience in general and SpinTunes experience in particular. Reviews and opinions can still vary among judges, but experience makes them more informed, and I think that makes for more worthwhile reviews. This not only improves the integrity of contest results; it also gives you the opportunity to contribute to other artists, and in the end that may be the most important result of participating in SpinTunes as either an entrant or a judge.

I think you can end up a better artist yourself, too. Articulating your opinions, and making them public, forces you to think more about your perspective, your aesthetic, and your knowledge, and to stand by all of it. That awareness seems only likely to strengthen your own future work.

If you do judge, here’s some advice:

  • Be open-minded and collaborative in behind-the-scenes discussions and have no expectations of them. It’s Spin’s show, and what he says goes. It’s not a democracy, it’s a benevolent dictatorship. And of course you’ll also probably disagree at times with the other judges. Just put your two cents in but then let them fall where they may.
  • Even so, put those two cents in as if you weren’t going to just let them fall. If there’s something you want to try to influence — challenge choices or descriptions, qualification decisions, etc. — speak up, sooner rather than later, and don’t wait for someone else to bring up the topic. Everyone is busy, time is always limited, things fall by the wayside, and if you don’t speak up you might find there’s suddenly not enough opportunity to make a difference about something important before decisions are made and put out to the world.
  • Be as objective about your subjectivity as you can. Judging is inherently subjective, but try not to let arbitrary whims play into your reviews. Have an approach and stick with it, and take each entry as much as possible only on the terms of the specific challenge and, beyond that, its own terms. There’s no good reason for any other personal biases, musical or otherwise, to come into play.
  • Be confident about who you are and what you like and know before you review, and try hard to only write things you’ll be willing to stand by…
  • … but also be willing to admit when you were off base, uninformed, etc. (See below for me doing a bit of that.)
  • Avoid too much discussion about your reviews and the contest in general. It’s too easy to stand by your reviews either too stubbornly or not firmly enough. Write them well, then let them speak for themselves. Then let it go.

Qualification, Challenges and Shadows

Qualification is a yes/no, black/white issue. A song ends up qualified or not, and it can be disqualified because of things related to a particular challenge or because of more general rules. In fact, the only two SpinTunes 5 DQs happened because of the general rule about entries needing to have lyrics. No DQ came from any failure to meet something unique to a challenge this time.

Judging how well a song meets a challenge is as qualitative and subjective as judging any other aspect of a song, and it’s often subject to many different factors. Music style and lyrical content both came into play for Round 2’s “pump up” songs. Number of characters, scope of story, dramatization and various other characteristics came into play for Round 3’s mini-operas. Something could seem black on one of these factors while the others remain white, or they can all just seem shades of gray.

Shadows are songs that aren’t vying in formal competition. Until now, you’ve been labeled a shadow if you simply choose to be one, or if you try to make a qualifying entry but miss the deadline. Either way, judges don’t have to review shadows, but if they do, it’s always up to them whether to rank them amongst the qualifying entries or not rank them at all.

As an entrant or as a judge, whether dealing with a true shadow, a deadline-victim shadow, or an entry that gets DQ’d for another reason, I’d still be interested in feedback for those submissions, including comparison to the other songs in the round.

It definitely makes sense for shadows and DQs to be lumped at the bottom in round totals, since that’s what determines who moves on. But just because that’s needed for that purpose, there’s no reason judges need to lump those songs at the bottom of their own reviews. Some judges strew shadows among their rankings, and this is understood to be no problem. Yet when I proposed treating DQs the same, I was told that anything other than pushing them all the way to the bottom made no sense and didn’t even seem possible to do.

Now, I did rank this contest’s two DQs lowest in my rankings, but that was total coincidence, not a foregone conclusion. When there’s so much else to every song, I can easily imagine giving a medium or even a fairly high score and rank to a song that gets DQ’d — even a song that I myself would vote to DQ. Whatever black-and-white factor justified the DQ, I’d score it low in that category, and all the other categories would remain up for grabs, to be looked at in themselves. That’s one benefit of the kind of scoring system I use: it works to contestants’ advantage in lots of ways, keeping any one factor from weighing too heavily in an arbitrary or biased way, whether the judge is conscious of doing so or not. Those same benefits would extend to DQ reviews just as they would to shadow reviews.
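
To make that concrete, here’s a minimal sketch of how a category-based score can keep a single black-and-white failure from sinking a song. The category names, rating labels, and point values are all hypothetical, not my actual system; they just show the mechanics.

```python
# Hypothetical category-based scoring. Category names, rating labels,
# and point values are invented for illustration.
POINTS = {"Poor": 1, "Fair": 2, "Good": 3, "Excellent": 4}

def total_score(ratings):
    """Sum the per-category ratings so no one factor dominates."""
    return sum(POINTS[label] for label in ratings.values())

# A DQ'd song: it failed the challenge outright, but every other
# category is still judged on its own terms.
dq_song = {
    "Challenge": "Poor",        # the black-and-white factor behind the DQ
    "Lyric Content": "Excellent",
    "Music": "Good",
    "Performance": "Good",
}
print(total_score(dq_song))  # 11 of a possible 16 -- far from the bottom
```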

As long as judges are allowed to rank however they want, no SpinTunes rules changes would be needed. I’d simply recommend that judges not automatically move shadows and DQs to the bottoms of their lists. The question of moving on is simply separate from the question of what a judge thinks of a round’s entries. Shadow and DQ’d entrants are likely to appreciate and learn from seeing where they fall compared to everyone else. Those songs can be factored out for the official combined rankings, as they were with my reviews, which turned out to be not only possible but easy to do, just as has happened in the past when various judges have ranked shadows.
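
For what it’s worth, factoring non-qualifying entries out of a full ranked list is a trivial operation. A minimal sketch, with invented entry names:

```python
# Derive the official ranking by dropping shadows and DQs from a judge's
# full ranked list. Entry names are invented; relative order is preserved.
full_ranking = ["Song A", "Song B (shadow)", "Song C", "Song D (DQ)", "Song E"]
non_qualifying = {"Song B (shadow)", "Song D (DQ)"}

official = [song for song in full_ranking if song not in non_qualifying]
print(official)  # ['Song A', 'Song C', 'Song E']
```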

One extra minor point: It’s one thing to shadow a challenge weeks or months or years after it occurs. You know you’re just shadowing, so there’s no point considering you a deadline victim. But if you intended to qualify, missed the deadline, and still turned in your entry, it might be more appropriate to think of that entry as a DQ for breaking a rule of the challenge, just like anyone who breaks any other general or challenge-specific rule is a DQ. “Shadow” would be a label reserved for those who intended to shadow from the start. In the end, it’s mostly semantics, especially if judges can treat shadows and DQs equally in their reviews, which they can.

One extra major point: Judging freedom currently allows for some judges to prize the challenge highest above all else while others may only consider it to inform a DQ decision and then entirely ignore it in their reviews. I find this distressing for a challenge-based contest and would hope for some judging guidelines to be put into place — whether through a scoring system or otherwise — to smooth out at least some of the most significant potential inconsistencies across judges, like this one.

The Best Challenges

An extra reason why a scoring system is a good thing: It can tell us which rounds overall produced the best songs. The simple rankings can’t say a single thing about that.

Obviously all judges’ scores would be figured in, but since no two judges have used the same system so far, let’s just take as an example what we can see in some averages from my SpinTunes 5 scores. Probably most meaningful, apples to apples, would be:

Average scores from each round for the entrants who qualified for the final round:

  • Round 1: 40.1
  • Round 2: 43.8
  • Round 3: 42.5
  • Round 4: 38.7

Average scores of the top four qualified entrants from each round (i.e., no shadows):

  • Round 1: 48.6
  • Round 2: 45.4
  • Round 3: 42.6
  • Round 4: 38.7

Average scores of the top nine entrants from each round, including shadows, nine being the smallest number of total entries for any round (Round 3):

  • Round 1: 45.6
  • Round 2: 42.9
  • Round 3: 38.5
  • Round 4: 42.5

Except for Round 1 on the first list and Round 4 on the third list (and that was due only to an abundance of solid Round 4 shadows), on every list the overall song quality happened to go down steadily with each passing round. Did entrants get burned out as time went on? Was there something inherent about the types of challenges that made each one produce “worse” songs than the previous one? Are “better” contestants being weeded out too soon because eliminations can knock out otherwise strong contestants when they make a fluky misstep, so maybe a different contest scheme would work better? Any of these things could explain the trend.

Then look at how the final-round qualifiers’ scores compared to the top four qualifiers for each round. Of course, they’re the same for Round 4, and they’re almost identical for Round 3 because my choices lined up closely with the overall round results (Round 3 would have matched perfectly if we were looking at average scores across all judges). So those two rounds don’t communicate anything. But look at the other two rounds, where the eventual finalists consistently underperformed the top entries. This also could suggest that “better” contestants got weeded out earlier on. Or it could suggest that some people are sporadic stars who do well but not consistently enough to see things through to the end. The truth may lie somewhere in between.

Independent of the overall clear downward trend over the course of the contest, would average scores help point the way toward picking better challenges, or perhaps at least picking a better order for the challenges so that the contest feels like it gains momentum with each passing round? Is anyone even interested in picking challenges based on the types that are likely to actually produce the best and most enjoyable songs, or are there other reasons for picking certain challenges?

It’s all food for thought. With a consistent scoring system used across judges, there’s at least the option of thinking about this stuff. Without it, this all goes completely unnoticed.
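
As a sketch of what that option looks like in practice, here’s how per-round averages like the ones above could be computed, assuming per-song scores gathered under one shared system (the scores below are invented):

```python
# Per-round score averages under one shared judging system.
# All scores are invented for illustration.
round_scores = {
    "Round 1": [48, 45, 41, 39, 37],
    "Round 2": [46, 44, 42, 40, 38],
    "Round 3": [43, 41, 38, 36, 35],
    "Round 4": [42, 40, 39, 37, 34],
}

def round_average(scores, top_n=None):
    """Average a round's scores, optionally only its top N entries."""
    ranked = sorted(scores, reverse=True)
    if top_n is not None:
        ranked = ranked[:top_n]
    return sum(ranked) / len(ranked)

for name, scores in round_scores.items():
    print(f"{name}: all = {round_average(scores):.1f}, "
          f"top 4 = {round_average(scores, top_n=4):.1f}")
```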

(For me, at least, it’s nice to see some quantitative evidence for something that I’ve just abstractly felt in the past, which is that, except for the inherent interest of the competition logistics themselves, things often seem to get generally less exciting as the contest goes on. The challenges may sound interesting, but listening to the songs isn’t necessarily as interesting, and then things often end somewhat anticlimactically. I guess I may not be imagining all of that.)

Judging the Judges

It might be valuable if there were a way for judges to get feedback about their judging. No need to devolve into some endless spiral of judging the judges of the judges, etc. And I’m not talking about the occasional comments on blogs or Facebook like “You totally misunderstood my song, jerk,” or “Gee, you have such great taste since you liked my song so much!” I mean something a bit more detached and general, like the judging itself is supposed to be.

This could lead to stronger judging as part of future SpinTunes contests. Spin already asks entrants for feedback about the judges. Maybe there’s a way to get some of that information published anonymously, and maybe without even naming the particular judges commented on. A pool of information, about things that generally did and didn’t work from judges, could help inform how future judges approach their reviews.

Didn’t someone, back in SpinTunes 1, do some number-crunching to see how each of the judges’ rankings compared to the overall rankings? You could then see which judge was most “right” (meaning closest to the overall consensus) and which ones were farther off. It would be interesting to see that for every SpinTunes contest and round, and then to notice how those judges approached things. Judges certainly don’t need to be of one mind every time, but it would likely benefit the contest if judges judged in a way that tended to be in the ballpark of what the contest results tend to be. It would probably mean much stronger feedback to help artists grow as well.
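
If anyone wants to redo that number-crunching, one standard way to compare a judge’s ranking to the consensus is Spearman’s rank correlation. Here’s a minimal sketch; the rankings are invented, and this simple formula assumes no tied ranks:

```python
# Compare a judge's ranking to the overall consensus using Spearman's
# rank correlation: +1 means identical order, -1 means fully reversed.
# Rankings are invented; each dict maps song -> rank, with no ties.

def spearman_rho(rank_a, rank_b):
    """Spearman correlation between two tie-free rankings of the same songs."""
    n = len(rank_a)
    d_squared = sum((rank_a[s] - rank_b[s]) ** 2 for s in rank_a)
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

consensus = {"Song A": 1, "Song B": 2, "Song C": 3, "Song D": 4}
judge_x = {"Song A": 2, "Song B": 1, "Song C": 3, "Song D": 4}

print(f"rho = {spearman_rho(consensus, judge_x):.2f}")  # rho = 0.80
```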

Regrets

As much as I tried to articulate myself well, I saw a number of places where, in hindsight, I realize I wasn’t as clear as I could have been. Some of it’s just like writing a song. Leonardo da Vinci said, “Art is never finished, only abandoned.” You can keep adding, subtracting, changing, but at some point you just have to stop, let it go, call it “done,” and move on. Beyond that, though, there are a couple of things I feel I was unfair about. One I could have helped, the other I couldn’t (nobody could), but I’d like to mention them both.

Emperor Gum’s “Pygmalion” in Round 1: The free-flowing looseness of the composition and song structure made me feel a bit lost even with the words. I scored Challenge and Lyric Content both as Fair. In hindsight, reconsidering the lyrics more directly on their own, I think they were really quite nice, enough for a Good in both categories and possibly even an Excellent for Lyric Content, jumping the song up to 14th or 15th place in my rankings. My own scoring system, separating qualities out into categories, was supposed to help me see just this sort of thing, to help me evaluate each element on its own terms without influence from the rest. The score jump wouldn’t have made a difference in the overall results, which still would have meant elimination. But I feel it wasn’t fair to Emperor Gum that I failed at my own attempt to keep the different aspects of a song from interfering with each other’s value. Maybe this happened for other songs and other rounds, too, and I’m still unaware of it. It does go to show how presentation really does affect perception, even when someone is actively trying to avoid being affected.

Living with a song, and Felix Frost’s work in particular: I was often somewhat critical of Felix Frost’s work. I wasn’t the only one, and I think I had good reason most of the time. At the same time, maybe it’s just the kind of work that’s harder to come across well in just a couple of listens. For better or worse (and sometimes it really can be either), it’s a lot easier to have a positive first impression with something accessible and catchy and easy to make sense of. More complex pieces, or otherwise unusual pieces, or even those that just use styles and motifs that you may generally not be a fan of, may take more familiarity, living with them a while, in order to develop deeper appreciation. I’m not saying I’d definitely fall in love with Frost’s work if I listened many more times, but I’m fairly sure that I’d at least develop some better appreciation of it with more familiarity, and beyond that I might actually like it more as well. The same could go for many other songs from many other entrants, especially (for me, at least) those that have a compositional looseness that can be hard to latch onto with limited exposure. You have to wonder, how would we judge a whole round differently if we had to listen to all the songs a few times a week for several months before writing our reviews? I’m not saying we should do that, but it’s a worthwhile question, about an issue that maybe inherently works against entries with certain qualities. Nothing to be done. And I feel bad about it. But I feel good being aware of the situation.

Leave comments 🙂

2 comments for “SpinTunes 5 Judging Wrap-Up”

  1. Aw, thanks Mark. Not that I felt crushed by being eliminated in round one again, but I was a little disappointed to come second-last that round. I think we have very different ideas about songwriting, and art being subjective, that’s completely fine. You did a really good job judging; I was impressed by your detailed breakdown of each song. In particular, making the challenge the major contributor to each song’s rank was interesting.

    In regard to the placement of the shadows and DQs: I think the shadows get placed at the bottom for clarity about who came where. I did like how you mixed them together, though. The DQs are a different matter; they aren’t competing on the same level as the other songs, so it seems wrong to directly compare them by adding them to the rankings. Perhaps if they were eliminated for being too short; otherwise I think they should be last. I don’t think judges should be picked or encouraged to rank contestants close to the judging average. I feel like that’s a self-perpetuating system.

    • Graham, thanks for all your thoughts.

      We do have some different ideas about songwriting, but I think — and this is based on me thinking back to other work I’ve heard of yours also, not just “Pygmalion” — the biggest difference is simply in our sense of structure. You like melodies and song structures that are a lot looser than what I prefer. Beyond that, I often enjoy the essentials of your musical sounds, and I generally appreciate what you write about and how you say what you say. So not so different as we may seem. The structure thing just has such a significant impact on end results that it’s almost as if you write in a style/genre that I’m not a fan of, so it makes the difference seem bigger than it is.

      With a DQ, I totally see your point. My point, though, is that DQs can happen for so many reasons that it’s impossible to generalize whether it’s fair or unfair to compare them with the rest. I could easily have imagined a Round 3 mini-opera submission with a single character that otherwise was brilliant about its use of the challenge and also brilliant generally in all other respects. In that situation, we’d have no choice but to DQ, because of the clear requirement for two characters. And yet I could imagine myself denying only an Excellent for Challenge, still giving a Good with enough other great opera elements, and then giving really good scores across the other categories. A song like that would feel (to me) very comparable to the others, and I might even end up ranking it at the very top of my list. I can think of lots of other similar examples like that. I can also think of plenty of examples of songs that do miserable things that would cause most any judge to rank them badly even though they do nothing that could get them DQ’d. In the end it’s clear that a song’s rank is based on all sorts of things coming together, and that a song that did much better or worse in any one area might have changed its rank significantly. All of this leads me to feel like there’s no real problem at all combining all entries — qualified, shadow, DQ — in the rankings. You just have to take those rankings for the mish-mosh they always mean anyway.

      I actually agree with you about the self-perpetuating thing being a bad idea and even felt that when I wrote it. Even so, looking at overall scores from a single scoring system could still lead to not only some “hey that’s interesting” stuff but also stuff that could make a positive difference in how challenges are chosen, how songs are reviewed, etc.
