A Transparent Treatise on Inconsistency in AJ Review and Author Processes

The subject of Review Consistency comes up on the boards regularly, so we felt the time was right to address the matter, for what it’s worth.

As some of our long-time members have astutely observed (ahem… privet, Kurly), we can either deny the existence of inconsistency outright and stonewall, or treat it as a reality that needs to be managed on an ongoing basis.

But we’ll say it again: we care about what you think, so let’s begin by admitting the truth.

At the end of the day, it would be silly to steadfastly proclaim that inconsistency never exists here.

That would be simply untrue.

Absolute consistency is an abstract concept that applies only in theory, not in practice, so long as people are making decisions in the complex world of appraising the objective commercial potential of art - a subjective medium by definition.

So there. We can say this very, very clearly. Inconsistency unquestionably exists in some areas of the review process, and always will. In fact, believe it or not, it’s something that needs to be managed on a regular, near-daily basis. It’s not something we hide from internally, as all reviewers know. But do we owe it to you to tell you a little bit more about it? That’s a debatable point for a business, but we’d like to think it might help frustrated authors understand what happens, and why, even if it won’t change the outcomes much in the short term.

At this point, the most important question to answer in order to manage inconsistency is this: how does it manifest itself, and why?

Let’s dig a little deeper to get to the core of the issue, to see what we can learn.

I. Team Inconsistency

First off, we currently have a team of 11 awesome individuals listening to submissions. The people reviewing your tracks are ALL well-meaning and well-intentioned, driven by a sense of responsibility toward the quality of the library and an honourable respect for the community of authors. If anyone has doubts about this, or feels there is a conflict of interest or personal vendetta at play, the fact of the matter is that you simply don’t know the people on our team well enough.

Please keep in mind that we do have a system that aims to minimize inconsistency on a daily basis. However, we will never be able to eradicate it outright, so long as people, especially groups of people, are making the decisions.

What we can tell you for starters is that inconsistency is practically non-existent at the extremes. Even with 11 people. That means the very best and the very worst submissions. In the absence of strong misrepresentation indicators, stellar submissions are practically always accepted, and categorically inadequate ones are rejected. We can agree on this, yes?

Now everyone can hear how great some of the best tracks we receive sound, but unfortunately the very worst of the lot never see the light of day. And we get a lot of both. We’d really love to give you examples of the “best of the worst” we hear, for your elucidation if nothing else, but for privacy reasons we simply can’t. We have a feeling most of you wouldn’t believe your ears.

If you can picture as an example, a track that sounds like a locomotive having intimate relations with a chainsaw while a music box sample plays in the background in a completely different key, with birds singing here and there, and random atonal electronic vocal accents… Well, you get the picture. Very creative perhaps, but AudioJungle is not the right library for that! :slight_smile:

Forgive the digression… Back to the point.

The greatest incidence of inconsistency occurs with the borderline tracks that are submitted to us.

Wait… What? What is a borderline track??

We thought you’d ask. To answer that, first we need to understand what “borderline” means, as far as we’re concerned.

Here’s the thing - “Borderline” is not a line at all. It is a Spectrum.

[Image: grayscale spectrum illustration - http://www.adg3.com/temp/grayscale01.jpg]

In this sense, a borderline track is one that falls somewhere near the middle of that spectrum. It does not stand out as flawlessly executed or obviously commercial, for any number of composition, arrangement, or production reasons, but neither do its flaws immediately make it sound completely inadequate for general commercial purposes.

Essentially, that’s where we experience the most inconsistency when we review: in that middle third of the gray zone. And the closer to middle gray a track falls, the more likely it is to be reviewed inconsistently.
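
For those who like to see an idea in code, here is a minimal, purely illustrative Python sketch of that spectrum. The 0-10 scale, the thresholds, and the function name are invented for the example; they are not our actual review criteria or tooling.

```python
# Illustrative only: a toy model of the "grayscale spectrum" idea.
# The 0-10 scale and the cut-off values are hypothetical, not real criteria.

def classify(grayscale_score: float) -> str:
    """Map a hypothetical 0-10 quality score to a review outcome."""
    if grayscale_score >= 7.0:   # clearly strong: consistently accepted
        return "accept"
    if grayscale_score <= 3.0:   # clearly inadequate: consistently rejected
        return "reject"
    return "borderline"          # middle of the spectrum: outcome can vary

for score in (9.2, 1.5, 4.8, 5.5):
    print(score, "->", classify(score))
```

Again, just a mental model: the real spectrum has no fixed numbers attached to it.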

But how does that happen? And why?

Great questions.

If a track is very borderline, the bottom line is that it can theoretically go either way, and that may also depend on a number of factors…

For a human reviewer, previously heard content can influence a borderline decision to varying degrees, regardless of training and experience. That is simply the human nature of habituation.

That’s what can make a “3” or “4” on the grayscale feel like a “5” or “6”, or the other way around.

Imagine you heard several great tracks in a row right before hearing a more average one… The borderline track WILL sound worse to your ears. The middle of the spectrum can shift away from you.

Yet if you heard several entirely unacceptable submissions immediately beforehand, the borderline track may get approved. The shift can also happen in the other direction.

Throw a bunch of different people into the mix, each at a different point in their shift, and you get an idea of how things can get hazy. There is, of course, strong training to remain focused and disciplined, and in many cases a second reviewer may choose to defer to the first, but this is obviously not always possible.
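
If it helps to picture the effect, here is a tiny, purely hypothetical sketch of how that recency shift can flip a call on the very same borderline track. The numbers (the 0.15 contrast factor, the 5.0 threshold, the scores) are all invented for illustration and do not describe the real review workflow.

```python
# Illustrative only: a toy model of the recency/habituation effect.

def perceived_score(true_score, recent_scores):
    """Nudge perception of a track away from the average of recently heard tracks."""
    if not recent_scores:
        return true_score
    anchor = sum(recent_scores) / len(recent_scores)
    return true_score + 0.15 * (true_score - anchor)  # hypothetical contrast effect

THRESHOLD = 5.0   # imaginary accept/reject cut-off
borderline = 4.9  # a track sitting almost exactly in the middle of the spectrum

after_great_tracks = perceived_score(borderline, [8.5, 9.0, 8.8])  # ~4.3: sounds worse
after_weak_tracks = perceived_score(borderline, [1.5, 2.0, 1.0])   # ~5.4: sounds better

print(after_great_tracks >= THRESHOLD)  # False -> the same track leans toward rejection
print(after_weak_tracks >= THRESHOLD)   # True  -> ...or toward acceptance
```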

Now, if a reviewer accepts a track that could arguably have been rejected (barring system errors), it is more often than not a borderline track that may or may not add value to the library.

Conversely, if a reviewer Hard Rejects a track that could arguably have been accepted, well, it boils down to the same thing.

Ok, so how come my track was accepted when I resubmitted it with NO changes??

Before saying anything else on this, we will just clarify: if you resubmit a Hard Rejected track without making any changes, (1) you are breaking the rules, and (2) you are taking a risk that grows each time you repeat it.

Let’s assume you throw caution to the wind. If that rejected track is resubmitted with no changes, several things can happen.

1 - Most likely, the track will simply be rejected again, especially if it sat on the weaker side of borderline, both objectively and subjectively.

2 - If it is matched by the system as a resubmission with no changes, it may or may not be reviewed again by another reviewer. The account may get flagged in our database either way.

3 - Less likely, statistically: if the item is determined to be acceptable on the second pass, it may get accepted. In this case, the first reviewer may get a notice, and a consistency advisement may be dispatched to the team for ongoing training. The account may still be flagged for resubmitting hard rejected content.

4 - Also less likely: if the track is accepted the second time around but the second reviewer accepted it incorrectly, the second reviewer may get the notice, and a consistency advisement may also be dispatched to the team for training purposes.

5 - Believe it or not, we also have instances where a borderline track that should have been rejected gets accepted on the first pass, and it only gets noticed when an update is submitted, or in some other way. While this exemplifies and publicizes the notion of inconsistency when held under scrutiny, and we know people are watching, we won’t usually overturn those decisions unless there was a legitimate processing error, even if that creates the impression of more inconsistency! But the reviewer and the team will still receive a private advisory and notes for training.

It really needs to be understood how much awareness there is of this matter internally, and there are always ongoing efforts to counter any significant imbalance in consistency.

To say that many rejected tracks are accepted when they are resubmitted unchanged is a relative statement. The number is obviously not zero, but as a percentage it is a very small proportion. That much we can tell you without a doubt. So while the statement is not wholly untrue, it describes a statistically minor fraction, albeit one that can attract visibility, we readily admit.

Ultimately, resubmitted hard rejections will usually be hard rejected again when spotted, and accounts flagged. If a pattern of the practice is verified, whereby it becomes evident that there is a habit of resubmitting hard rejected content without any changes or any reasonable communication with the team, the account’s upload rights will be revoked, pending further communication via support channels. We don’t really have much of a choice when comms are repeatedly ignored.

Bottom line, do the referees see each and every foul that happens on the football field? You know the answer to this.

But when a pattern of infringement is verified, well, we have to have yellow and red cards too. That’s just the way it is. So please govern yourselves accordingly.

The regrettable part, in all honesty, is that resubmitting rejected tracks with no changes, accepted or not, does nothing to improve one’s skill at audio production. It just demonstrates a certain short-sightedness, where getting accepted is valued more than producing content that could actually sell. Way more often than not, it devalues both the library and the portfolio: borderline content is less likely to get sales AND makes the rest of the portfolio less likely to be browsed further. Please think about that, people.

And that’s why we have to draw the line sometimes.

II. Inconsistency over time

A separate issue, still worth mentioning while we’re on the topic, is the appearance of inconsistency in the approval standard as the library has evolved over the years.

We won’t delve as deeply, but the truth of the matter is “Acceptance standards have continued to evolve over the months and years, so something that may have been previously accepted in the past would not be accepted today.” Yep, that’s coming straight from the textbook.

This is why it is incorrect to make assumptions or comparisons based on items that were accepted a year or two (or more) ago when considering submitting content today. Doing so may well create the impression of strong evidence of inconsistency, but the comparison does not reasonably apply today. Apples with apples, oranges with oranges, please.

Is there a plan for this subset of the issue in the future? Yes, but this post is not the place to expound upon that. Something to discuss later this year perhaps :slight_smile:

How can inconsistency be reduced?

Finally, winding to a conclusion, we can share a last few important points. We already have consistency measures in place. They can never be perfect, but the fact that they exist reflects the internal awareness of the issue.

In the queue, many of the items that we place on hold are ones that go through a second review without you realizing it. However, for efficiency reasons it’s simply not possible to assign 2 or more people to every submission received. We also cannot create multiple filtering systems right now without hugely impacting the length of the queue. And then we’d be lambasted for that, right? :slight_smile:

But we do encourage our team to honestly seek another ear when in doubt. This minimizes inconsistency more than is immediately apparent.
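
For the statistically curious, here is a rough, back-of-the-envelope sketch of why a second ear helps: if each judgement of the same borderline track is treated as the true quality plus independent noise, averaging two judgements halves the variance of the outcome (the spread drops by a factor of roughly 1.4). The noise level and the score scale below are invented for the example; this is not our internal process, just the statistical intuition behind it.

```python
# Illustrative only: variance reduction from a second, independent opinion.

import random
import statistics

random.seed(0)
TRUE_QUALITY = 5.0   # hypothetical dead-centre borderline track
NOISE = 1.0          # hypothetical spread of a single human judgement

one_ear = [random.gauss(TRUE_QUALITY, NOISE) for _ in range(10_000)]
two_ears = [(random.gauss(TRUE_QUALITY, NOISE) + random.gauss(TRUE_QUALITY, NOISE)) / 2
            for _ in range(10_000)]

print("spread with one reviewer :", round(statistics.stdev(one_ear), 2))   # ~1.0
print("spread with a second ear :", round(statistics.stdev(two_ears), 2))  # ~0.71
```

That narrowing of the spread is the intuition behind placing doubtful items on hold for another pass wherever the queue allows.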

We also scan the rejection forum threads with the Community Team and take appropriate action when warranted. Envato does believe in the value of a “Fair Go”, as much as possible, after all.

But sometimes, to reduce inconsistency in the future, what’s best called for as authors is some really brutal self-honesty, asking ourselves: “Does my work stack up? Am I in any way closer to the borderline than I think I am?”

This is a question for all authors, and also for reviewers.

Anyone can always look for ways in which their work might fall short, or where improvements are possible, but unfortunately, as individuals, we are all prone to being hindered, without even knowing it, by what’s known as the Dunning-Kruger Effect. It’s not a plugin. :stuck_out_tongue: It’s an inescapable quandary of consciousness that can creep up on individuals at all levels. But through persistent trials and tribulations, many believe it’s something anybody can surmount, given an open and honest will to look inward. And when each next level is reached, a veil is lifted and we can often see what we simply were not able to be aware of before. Because we are all sometimes blind to ourselves in the present.

If anyone can sincerely and introspectively admit to themselves that their submissions may well be classified as borderline work, and they wish to minimize their experience of inconsistency, they can either work to improve their craft with time, patience, and persistent dedication, growing as an artist, or accept that they will inevitably be subject to the inherent, inadvertent inconsistency in the review of borderline submissions.

The review team is striving to do the same too, every day.

In any field, the other regrettable option is to remain annoyed or embittered and brazenly point fingers at the world for being what it is. But that alone doesn’t change anything. It never will. It’s like cursing the ocean for making you wet each time you try to swim in it.

As the old saying goes, “if you’re not part of the solution, you’re part of the problem”.

While we cannot explain to all authors every single dynamic of the backend review system and expect them to suggest realistically deployable methodologies, part of the solution can start with oneself. That is the only part we can control for ourselves.

By making every effort and using every avenue available to learn and get a bit better each day, the measure of inconsistency will narrow as the two groups, author and reviewer, take steps toward one another. But each group can only go so far on its own, so the remaining gap depends on the steps both parties take.

That leads to progress beyond the realm of the borderline, and that, regardless of any undeniable inconsistency or imperfection in the process, will undoubtedly make the results more consistent, because rejections will gradually occur less and less.

You can take it or you can leave it, but that’s how it happens here.

Have a great Sunday, forum friends, thanks for taking the time.

===============================================

Follow-up answers to questions, Why hard rejection feedback is not possible: Part 1 and Part 2


Excellent post. This should be required reading for AJ authors. Thanks!

Great post!

But given how successful Envato has been, I would love to see you add reviewers so borderline rejects would always be reviewed by a second person, without that second person knowing it was already rejected. And it does not have to impact the queue if you simply absorb the cost of the additional reviewers out of your ample profits!

Also, add a tiny bit of time on each reject to tell the author why the item is being rejected. Again, this will not slow down the queue if you simply add reviewers and absorb the additional cost out of profits! If a reviewer goes through a process to reject, why can they not have a simple but complete form to check off the criteria for the author? Yes, I know it is not your job to teach, blah, blah. But these authors provide the means for Envato to succeed! They deserve the respect of knowing why their submission was rejected, in my opinion!

Obviously you do not have to do either of these things because you are successful enough as it is. But these things would not cost a lot and would greatly improve the experience for your suppliers and the community in general!


Thank you, Adrien. To be honest, you didn’t need to write this message, but you did. I mean, all these explanations are awesome… Awesome for those who have forgotten the nature of this place. This is a market. Not a chat, not just another friendly community, not a school for amateurs, not a personal psychologist (though sometimes this place works as all 4 things together). But first of all, this is a working, evolving, growing, selling (thank God) marketplace. For example, to maintain the consistency of my own portfolio, I constantly review old tracks and delete some of them, and I spend more time learning and listening to best-selling and trending tracks, simply because I want more quality tracks in my portfolio. And I believe that anyone who wants to succeed here must follow the quality level of the top tracks. That’s it. Again, thank you for explaining all the nuances of the reviewers’ job. We appreciate the awesome, helpful (though not ideal - who’s ideal here???) work of your team and you.


Thanks for this post, it’s a great exercise in transparency. Rejections normally do serve to improve oneself :)

many" thanks ADG3studios for addressing this ancient issue,

what i learned from your post , you admit the inconsistent of a reviewer member they just humans like us , they need to rest for a while and by pushing humans into their limits , mistakes will occur.

but i must admit rejection will makes the author better at creating music but sometimes the top quality file can be rejected by mistakes.

suggestion from me : i belive there are 1000+ submission a day and only 11 reviewer handle this gigantic queue…sure they will burn out quickly by add 3 or 5 more member will relieve the pain they received. as LeatherwingStudios says…please add more reviewer and please spare more time with rejection reason… it’s the only thing i ask from you and your team. and maybe the rest author in here.

ADG3studios said

If you can picture as an example, a track that sounds like a locomotive having intimate relations with a chainsaw while a music box sample plays in the background in a completely different key, with birds singing here and there, and random atonal electronic vocal accents… Well, you get the picture. Very creative perhaps, but AudioJungle is not the right library for that! :slight_smile:

is this even legal? :D

Thank you for the excellent, transparent and clarifying post. I actually feel ashamed and stupid for sometimes advising people to resubmit hard rejected tunes in borderline cases where my own ears said the quality and concept were better than many well-selling tunes of late. I have done this myself a few times, and while I believe you when you say these are marginal cases, on some of those occasions the tune was accepted (and has also sold licences). In any case, I apologise for the fouls; I will cease doing this and take this post to heart.

Thanks for the article - it was a very interesting read. I’ve only been here for a few weeks, but you guys seem to run a tight ship and I’ve been impressed by the quality control.

I had one lingering question: does the author’s past work or rejections ever come into play when reviewing a track, or are the tracks anonymous? I don’t ask because of preferential treatment, but more because I’m worried that my past lower-quality tracks that were rejected will negatively affect how I look to the reviewers.

This is quite an explanation.

ps just 11 reviewers? wow…
Does this include SFX submissions as well?

ADG3studios said

…the Dunning-Kruger Effect. It’s not a plugin. :stuck_out_tongue:

Hahah.

Very insightful post. Thanks for reaching out to the community.

Thank you, Adrien, for spelling all this out for us! It’s always good to have this reminder! :slight_smile:

Thanks for the post, Adrien. As a former manager of a company, I know how the “human” element of rule keeping plays out!

I’m glad you mentioned the organic nature of art, judgement, and the like. It’s absolutely essential to understand that, in order to grasp the situation correctly. Very well put!

Hello, I don’t know if this has been covered before, but I’m wondering if the reviewers work independently in their own studios (using their own monitoring setups) or is there a central office at which they all work?

Maybe the best thing to do Adrien is implement a new rule…any threads that are clearly a complaint against a hard rejection will be deleted or locked…just like self promotion or spam.

Thanks Adrien. Well said and good explanation so people get a better understanding about this issue!

Sky-Productions’ idea is not bad at all. Or rejected tracks could be moved into “Item discussions”.

While I’m sure it’s something that Adrien and his team strive for, perfection or 100% consistency will never be attained as long as the business requires human interaction. They’re people, not robots. That being said, I think as long as they push for consistency in rule enforcement, most here are happy campers. We all understand the human element.

+1

Sky-Productions said

Maybe the best thing to do Adrien is implement a new rule…any threads that are clearly a complaint against a hard rejection will be deleted or locked…just like self promotion or spam.

Or, just move them to the ‘Item Discussion’ forum where they should have been in the first place.

Thanks for the post. It’s incredible work.