Can AI write a good award entry? Should you use AI to write your award entry?
Chris Robinson
MD, Boost Awards
This article is written by Chris Robinson, MD of Boost Awards, without the use of AI
AI-generated award entries are prolific and increasing. According to recent (late-2024) research published by the Independent Awards Standards Council, awards judges estimated that about a third of the entries they had seen in the previous 12 months (the average estimate was 36%) were AI-generated.
Both awards judges and organisers surveyed believe that the proportion of award entries being AI-generated will increase with time.
It is therefore no surprise that I’m increasingly asked: ‘Why don’t I just use AI to write my award entry?’
Having spent a year discussing this topic with awards industry stakeholders, attending a summit for consultancy owners focused on AI risks and benefits, and most recently poring over the findings of the research mentioned, I would like to share an in-depth response to this superficially simple question.
First consideration – do judges care?
So, you’ve spent the last few months or years rolling out a successful project, initiative or strategy, and you want awards judges to hear your story and agree it deserves an award. Should you entrust AI with convincing the judges that you should win?
The first two questions here are… can awards judges spot AI-generated entries? And do they care?
Here are some statistics from the survey I mentioned earlier that answer these questions. Firstly, yes: 78.6% of judges believe they can spot AI-generated award entries due to telltale signs like a ‘lack of soul’ or having too much ‘waffle’ (amongst other indicators).
But do they care? It is a ‘yes’ here too. Surely, some might argue, judges should simply judge an entry on merit and ignore how it was written? The salient point is that, regardless of the writing style, you need judges to trust that the facts and evidence presented are to be believed – yet only 21.4% of judges agree with the statement: ‘I trust generative AI to be able to accurately represent the reality of the story being entered’.
21.4% of judges agree with the statement: ‘I trust generative AI to be able to accurately represent the reality of the story being entered’
This suggests that the decision to mark an entry down is not always explicit; in many cases, though, it will be made consciously. Indeed, judges are split down the middle, 50/50, on whether to mark down an entry simply because it is AI-generated. A minority, meanwhile – 14.3% – stated that they would like to see such entries disqualified altogether.
I’ve also spoken personally to a number of awards judges, and it is clear from those conversations that they would prefer to read human-generated content that reflects the passion and authenticity of the person nominating the project/person/team/business. Their bias, whether intentional or not, will favour the human writer when picking the winner.
Second consideration – do awards organisers care?
Surely it’s irrelevant whether an awards organiser cares about submissions being AI-generated or not? The organisers aren’t (or shouldn’t be) the judges, after all; but they are the gatekeepers and rule-setters.
Although many awards organisers are ambivalent about AI-generated entries, the majority – whatever anyone else thinks – are against them.
It might come as some surprise to learn that 31.3% of organisers surveyed already filter out AI-generated award entries (either using people to spot them, or technology such as GPTZero, Grammarly, and Originality.ai).
An increasing number of awards platforms are building this vigilance into their systems and offering organisers the opportunity to enable filters. Furthermore, it looks as though many of them will be doing just that: 65.6% of awards organisers stated that they would use tech-based filters if their awards platform supported it.
31.3% of awards organisers surveyed already filter out AI-generated award entries
We would encourage all awards that do this to be transparent about it – for entrants and judges alike. And, sure enough, most of the awards that already apply filtering have a written policy to this effect, or are planning to introduce one in their next iteration. An early example is a stipulation in the Learning Awards, which states: ‘Submissions relying solely on AI-generated content will be asked for resubmission.’
To conclude this point: yes, you could get AI to generate your award entry, which would be quick, easy and cheap; but there is a risk of having to resubmit, or of being disqualified. You would essentially need to comb over the entire AI-generated draft to ensure it is correct, reflects your voice and perception of the truth, AND doesn’t contain so much as a sniff of AI. You might as well write it from scratch, or dictate it to a skilled writer.
Third consideration – will AI add value?
From what I have seen, the output you get from giving an award entry job to AI is not a million miles away from what you would get by giving the same job to an intelligent and obedient English graduate or journalist. OK, AI will turn it around a lot more quickly and cheaply, but it is in the same ballpark in terms of linguistic quality.
However, having spent literally decades writing award entries and training up colleagues at Boost, I can tell you that award entry writing is far, far more than simply documenting the material provided: and no, you cannot just drop a graduate into a senior consulting job and watch them fly.
Finding the right contacts to interview, asking the right questions, pushing back against poor answers, finding a winning angle, deciding the scope of the story, sorting the ‘wow’ from the ‘OK’ in the details, identifying the right evidence points, spotting the gaps, making tough decisions on additional research to conduct, identifying what to cull and what to keep… these are all skills that come with time and human understanding of context and nuance (aspects often cited as shortcomings of AI).
65.6% of awards organisers stated that they would use tech-based filters if their awards platform supported it
People who are highly skilled at using AI would argue that you could give the AI or the graduate some examples of what a great award entry looks like; but even so, they will only do their best with what they are given.
Analogy time:
If you gave poor-quality ingredients to a robot chef and said, ‘Bake a cake with this’, it would do as it was instructed, maybe even checking what type of cake you wanted or deciding upon the best type of cake to produce with the ingredients it had been given; and it would then create a technically perfect cake that would tick the boxes. BUT, would it be good to eat? It would most likely only be as good as the ingredients, because the robot chef can’t taste the cake or challenge the quality of the ingredients… and the same is true for award entries. Award entries are meant to present the facts, yes. However, the craft lies in filtering the boring facts from the compelling facts, deciding on an angle that will engage the reader, deciding which killer stats might evidence outstanding outcomes, and deciding what is missing. Has an AI ever said: ‘You know what? The research part of this story is weak’? No.
Next consideration – can you trust AI to be accurate?
As already mentioned, awards judges and organisers do not trust AI. But should you? Generative AI relies on maths, source data you supply, and deep pools of other data to decide which words follow on from the previous words. Each word choice is based on probability. It cannot tell if it is accidentally missing the context, sharing a bias, or misinterpreting the source material. It often takes itself down blind alleys, and (especially if doing its own research) too often generates complete nonsense – referred to as ‘hallucinating’.
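To make ‘each word choice is based on probability’ concrete, here is a minimal, purely illustrative sketch in Python (the candidate words and probabilities are invented for illustration, not taken from any real model): the generator simply samples the next word by weighted chance, and nothing in that process checks whether the most likely word is actually true.

import random

# Hypothetical next-word probabilities a model might assign after the prompt
# 'Customer satisfaction rose by...' (all figures invented for illustration).
next_word_probs = {
    "12%": 0.35,
    "significantly": 0.30,
    "300%": 0.20,  # sounds impressive, but nothing here checks whether it is true
    "a record margin": 0.15,
}

def sample_next_word(probs):
    """Pick one candidate word at random, weighted by the model's probabilities."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))
# The sampler only knows which continuation is statistically likely;
# it has no mechanism for verifying which figure reflects reality.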
Your award entry might be entirely accurate, but you will need to double-check every fact and stat, often having to go back to the source material to be entirely sure you have included all the facts and represented them accurately. In truth, you might as well do it yourself, or get a person you trust to do it for you.
As an article on the University of Cambridge website so eloquently puts it: ‘Oops! We Automated Bullshit: ChatGPT is a bullshit generator’. And at the conference I recently attended, Marc North of Durham University said: ‘AI has no desire or requirement to tell the truth’.
Furthermore, if an awards judge has experienced AI hallucinating or getting things wrong, they will harbour an ingrained bias against AI content. So if they suspect your content is AI-generated, you could fall foul of the first consideration above.
Another consideration – can it do harm?
With any business decision, you have to ask yourself: ‘What’s the worst that can happen?’ Well, if you thought the worst that can happen in this context is that you get disqualified from an awards programme, there actually is worse.
Firstly, if you are using AI without declaring it, a common issue is that when people ask, ‘Why did you write it like that?’ or ‘Why did you include this and not that?’, you will have no good answer. You cannot go back to the AI and ask it to justify its decisions. What we keep finding is that it chooses waffle over data points, even when asked not to. It will often ignore the most important points to include, and you’ll consequently be made to look bad by not having good answers to challenges about editorial and narrative decisions. Then you have no choice but to admit you used AI; and that’s the last time you and your preferred AI tool will be asked to do that task.
For another thing, AI uses previous content to generate new content – and this can mean your supposedly new content might reflect unintentional bias or prejudices. For example, images are being criticised as overly sexualised, and assumptions about gender and race in society often creep into content.
Even worse, if you don’t use an enterprise AI licence, then everything you process is in essence shared with a third party, potentially including confidential content. If you are bound by a privacy policy or non-disclosure agreement, then using the online ChatGPT or an equivalent tool is a clear breach of that policy, and the content you share is going to be stored on a third-party server and used to train future models.
And finally, worst of all, is the subject of intellectual property. You might find yourself accidentally infringing upon someone else’s copyright (multiple lawsuits are currently going through the courts), with your final content belonging to… no one knows! I asked a panel of AI experts, ‘If I use AI to write an award entry, who owns the IPR of the final content?’ and no one had a clear answer. It is still the Wild West.
The green and pleasant land of AI-generated content is not so pleasant after all. It is actually a minefield of risks and untested, un-litigated legal woes for the immediate and long-term future. Enter at your peril.
The final consideration – will it win?
If you are writing a press release or case study, then an average piece of content is arguably good enough. But if you are up against the best in the industry, your entry has to make your story sound the worthiest of winning; and that is not easy. It isn’t enough to ask, ‘Can AI write an award entry?’ You have to ask: ‘Can AI write an award-winning entry?’ The answer to the first question is clearly ‘yes’, but the answer to the second, given everything you have read here, is ‘unlikely’. On top of this, I would argue that it will become less and less likely as more people use AI to generate their run-of-the-mill award entries and AI-generated entries become the baseline. I predict that we will then revert to bringing humans into the mix more, this time to steal a march on the bots.
Conclusion
I do think that AI will revolutionise and disrupt many industries – including awards. We are using AI to automate routine processes, capture call notes, analyse datasets and handle other aspects of the day job. Interestingly, the survey mentioned earlier suggested that awards judges and organisers are both fine with this. But when it comes to the point where the rubber hits the road – when judges’ eyeballs read our clients’ stories – that is where AI’s role becomes arm’s-length.
My reasoning goes beyond the rational arguments presented here, and towards a deep human need for trust and authenticity. I am convinced that as the line between ‘real’ and ‘realistic’ is blurred by AI, there will be a growing desire for help in spotting authentic human-generated content across all media. Awards platforms will add AI-spotting filters, and publishers will increasingly advertise AI-free content (in the same way as GMO was meant to revolutionise agriculture, but instead we saw ‘No GMO’ logos everywhere). The cynicism about whether content is authentic or not will undoubtedly bleed into the realm of awards judging.
AI will undoubtedly get better and better at generating content, including award entries, but it will only ever be an imitation of this very human craft: only as good as the content it is given and the next word the algorithm predicts should follow the previous one. I side with the awards judges and call for award entry writers to keep it real. Not just realistic.
Boost – a helping hand entering awards
Boost is the world’s first and largest award entry consultancy, having helped clients, from SMEs to multinationals, win over 2,000 credible business awards. Increase your chances of success significantly – call Boost on +44(0)1273 258703 today for a no-obligation chat about awards.
© This article was written by Chris Robinson and is the intellectual property of award entry consultants Boost Awards.
Looking for awards to enter?
Sign up for our free email deadline reminders to make sure you never miss an awards deadline. Every month you will receive a comprehensive list of upcoming awards deadlines (in the next two months) organised by industry sector.