Lots of charitable trusts have paused funding to charities because they have received “unprecedented” (or similar words) numbers of applications in recent months. Of course, people’s needs don’t stop, and the expenditure of the charities that help them can’t be paused.
Jo J maintains a Trust and Foundations List – just check out how many are paused, and how frequently “numbers of applications” is the reason. It’s a lot. This has been in the news a fair bit recently too.
Funders have always received more applications than they can support. Demand for funding exceeding supply isn’t new. Clearly Local Authorities still don’t have any money for the voluntary sector, the economy isn’t great and the cost of living is still high: other sources of funding are still scarce, but we’re not hearing that they’ve dried up dramatically in the last 6 months. So why have all these funders received so many more applications?
The Civil Society article suggests some reasons, and I’m sure there’s some truth in them (especially the domino effect). But what’s not mentioned, and my guess: AI. Availability and awareness of highly capable generative AI (i.e. ChatGPT and the like) is what’s changed in the last year.
Our last recruitment round was inundated with applications that were strikingly similar to one another, and to the responses ChatGPT generated to our application questions.
The Paul Hamlyn Foundation has felt the need to put guidelines on their website. A choice quote from those:
Our view of AI in applications
As the use of these tools has started to become more common, we have noticed that the similarities in language and the points being made can make it more difficult to understand what is different or special about an applicant, or the work they are describing. This is likely to disadvantage an application – whether for a job or a funding application.
For the record, I absolutely share their broad concerns.
This is just a guess on my part – I have no insight into the applications being received by all these funders that are having to take a time-out, or how they’ll respond. But it feels like a reasonable guess, especially given the emergence of AI tools specifically to help write funding bids.
So how does this play out? Presumably different funders will take different approaches. Here are some plausible approaches.
Just Trust
As part of the application process you’ll already be signing something saying it’s true and you have authority to apply. Will you also have to confirm that you didn’t use AI to complete the application? But how far does that go? Not at all? Only for a first draft? What about AI tools that check grammar and clarity? Will they still need to use…
‘AI Detectors’?
Maybe funders will start adopting similar tools to those being used by universities etc. to detect AI essays. Like PHF, they’ll ask you not to use AI tools, and your application will be passed through an (AI powered?) AI detector before it’s seen by a human. And then welcome to the technology race as AI tool-makers seek to improve their models to beat the AI detectors… and so it goes on.
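To make one of those detection signals concrete: a toy Python sketch (not any real funder’s tool, and far cruder than actual AI detectors) that flags pairs of applications with suspiciously similar wording, much like the near-identical recruitment answers mentioned earlier. It uses only the standard library; all the application IDs and answer text are made up.

```python
# Toy sketch: flag pairs of applications whose answers are unusually
# alike. Real AI detectors are far more sophisticated; difflib's
# SequenceMatcher is just a stdlib stand-in for illustration.
from difflib import SequenceMatcher
from itertools import combinations

def similar_pairs(answers: dict, threshold: float = 0.8):
    """Return (id_a, id_b, ratio) for each suspiciously similar pair."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= threshold:
            flagged.append((id_a, id_b, round(ratio, 2)))
    return flagged

# Invented example answers: the first two differ by a single word
apps = {
    "app-01": "Our charity empowers local communities to thrive.",
    "app-02": "Our charity empowers local communities to flourish.",
    "app-03": "We run a weekly lunch club for isolated older people.",
}
flags = similar_pairs(apps)  # only app-01/app-02 should be flagged
```

A similarity screen like this only catches applications that resemble *each other*, of course; it says nothing about any single application in isolation, which is exactly why the detector-vs-generator arms race follows.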
AI Assessment
AI tools will be developed to assess applications against the funder’s criteria – a ‘two can play at that game’ approach. The first round of assessment will be automated and you’ll get “computer says no” responses, whether you used AI or not. Can AI be taught to disfavour AI-generated applications? Or will it have a hard time not finding them more compelling? But there may not be enough training data to actually do this.
Limited Access?
Or perhaps funders will judge that it’s only manageable to fund organisations they have previously funded, or to proactively seek out groups to fund. Like many smaller funders, they won’t accept unsolicited applications. So newer groups will be frozen out, and those who happen to know a trustee or two, or have some other ‘in’, will likely be at a considerable advantage. Charities will need to replace trust fundraising with marketing and PR staff to raise their profile with these funders.
Follow the Data
This is a radical one: funders will largely forgo words. Your application for funding will be something like this:
1. Charity Number: (they can look up the rest, and accounts etc. from that)
2. Impact of our funding: (e.g.) 50 people move from score of 4 to score of 6 on A Wellbeing Scale.
3. Location of project:
4. Categories of beneficiaries:
5. Amount needed:
6. Amount sought:
You say what impact you’ll have, how much it’ll cost, and enough information to check against their criteria (location, beneficiaries). What you do with the money follows afterwards. Funders fund impact: you, the expert, judge what to do to best achieve it. There are some pretty obvious downsides to this type of approach – most of all the lack of human connection in what is fundamentally a human thing.
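To show how little machinery a data-only application would actually need, here’s a toy Python sketch of the eligibility check for a form like the one above. Every field name, location, beneficiary group and threshold is invented for illustration; no real funder works this way.

```python
# Toy sketch: a data-only funding application reduces eligibility
# checking to a few field comparisons. All criteria below are invented.
from dataclasses import dataclass

@dataclass
class Application:
    charity_number: str      # funder can look up accounts etc. from this
    impact: str              # e.g. "50 people from score 4 to 6 on A Wellbeing Scale"
    location: str
    beneficiaries: list
    amount_needed: int
    amount_sought: int

# Invented criteria for an imaginary funder
FUNDED_LOCATIONS = {"Bristol", "Sheffield"}
FUNDED_GROUPS = {"older people", "young carers"}
MAX_GRANT = 15_000

def meets_criteria(app: Application) -> bool:
    """The whole 'assessment', minus any judgement of the words."""
    return (
        app.location in FUNDED_LOCATIONS
        and any(group in FUNDED_GROUPS for group in app.beneficiaries)
        and app.amount_sought <= min(MAX_GRANT, app.amount_needed)
    )

app = Application("1234567", "50 people from score 4 to 6",
                  "Bristol", ["older people"], 20_000, 12_000)
eligible = meets_criteria(app)
```

Notice what’s missing: nothing in that check reads any prose at all, which is both the point and the problem.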
So What Next?
For now? Judging by PHF’s statement, you need to stand out, and the way to do that is to be authentic, to be human, to communicate in your own voice, not the fine words that AI tools produce. But it seems clear to me that, if it is AI behind all this, it’s caused real damage to the sector, and to the people it helps, by overwhelming funders and causing them to close their doors.
Do you have any thoughts about which way funders will go? Or perhaps it’s not AI at all – if not, what is it?
Thanks to Jo J., Alex Hayes and Alex Evans, PhD (all on LinkedIn) and plenty of commenters for insights around this. And Photo by Melissa Walker Horn on Unsplash – thank you too.