James Doeser: Is Arts Council’s plan to measure quality £2.7m well spent?

As a researcher, you might imagine I’d welcome the news that Arts Council England has set aside up to £2.7 million for a grand exercise to measure the quality of the work it funds. After all, this is a huge endorsement for research in the service of arts funding: the commitment to seek an objective and transparent way to account for how ACE spends money and to inform where it goes in future.

Nonetheless, it’s fair to say that the chunk of the research community with its integrity intact views the development of the quality metrics with a good deal of scepticism.

For a start, £2.7 million is an incredible amount of money to set aside for research and evaluation. Two. Point. Seven. Million. Just let those words rattle around your mouth a bit. Someone is going to get one hell of a payday.

In Theatreland, that sort of money would cover the running of a mid-sized theatre company for a couple of years. It would get you four tickets to Hamilton and (according to recent estimates) would buy you one 36th of Andrew Lloyd Webber.

This investment comes at a time when the Arts Council has been spending lavishly on research activity, including a review of the theatre sector. Now it has set aside a record sum to tackle the hardest challenge of all, the research holy grail of arts agencies the world over: a systematic and robust measure of quality that works across a mixed portfolio of art forms, organisation types and audiences.

Late last year, I hosted a webinar on the subject of ‘Quality Metrics’ – the name of ACE’s assessment project – bringing together an international set of voices that sought to interrogate the approach.

What we learned (and what comes through in the evaluations of pilot versions of the approach) is that much of the value of Quality Metrics lies not in the final numbers, nor in the stack of survey data about this or that performance, but in the simple fact that it brings together different elements within arts organisations to have a single conversation about quality.

I had naively assumed this was what organisations did as a matter of course. How else could they keep the show on the road without incessant conflict? I started to get worried that ACE was about to buy a multimillion-pound icebreaker.

It’s worth reflecting on where this originates. In less than 30 years, the research questions at ACE have gone from ‘how many performances do we subsidise?’ to something like ‘what is the aggregate level of excellence that we invest in?’.

Sure, there have been improvements in research method and theory that allow for a more sophisticated approach, but ultimately the forces that have driven this change are political. In the old days, ACE and the organisations it funded justified the use of taxpayer money with a patrician’s arrogance: we fund it because it’s good, and we know it’s good because we say it is. These days it seems the arts world can no longer escape the responsibilities of objectivity and accountability that have revolutionised other parts of the public sector.

We are all still living with the profound impact of Achieving Great Art for Everyone: the grand strategy that integrated the cash machine and the development wings of ACE.

No money leaves the building without the recipient demonstrating how it achieves ACE’s goals. It doesn’t matter how special your performance is or how flash your production will be, ACE is not in the business of subsidising shiny things, it now makes investments. And the return on that investment is measured against the delivery of those five strategic goals: excellence, reach, resilience, diversity, young people.

The unspoken truth is that there is a hierarchy of goals, with Goal 1 at the top. One goal to rule them all. It was the only one that really mattered in the days of ACE-the-cash-machine.

Measuring against Goal 1 was always going to be tricky. Appraising the quality of arts and culture is a heavyweight philosophical undertaking. And the current generation of consultants and arts administrators are just the most recent in a long line of people to have a go.

Our understanding of these issues has been developing since Plato and Aristotle thrashed out the fundamentals of aesthetics; centuries later, Kant searched for some firm ground on which to base his subjective impressions. Taking a social turn, Walter Benjamin and colleagues at the Frankfurt School tried to make sense of culture and aesthetics in the mechanical age. Later, Pierre Bourdieu and others argued that in post-industrial society many people use their consumption of culture to signal their belonging to this or that social class, rather than as a true expression of their instinctive preferences.

As you read this there are people seeking to understand where art generated by algorithms fits in this system, and whether computers themselves might autonomously be able to appraise the quality of such work. That is some auspicious company for us mortals to be keeping, and should indicate that this undertaking requires the deepest of deep thought.

What will this mean for the future? The Arts Council is going to use these ratings as a factor in future decision-making. This is consequential, not just an indulgent exercise in data collection.

In the land of Quality Metrics, points mean prizes – one imagines a continuation or uplift in revenue funding. The prospects of a national portfolio organisation could be at stake if its scores fall below those of its peers.

Should one NPO fare worse than another in its peer group according to the system, the higher-performing NPO could be in line for a more secure or generous arrangement with ACE. Lucky them. But over time this could have serious consequences for ‘the sector’ more widely and the corpus of culture it produces.

I wrote in The Stage last year that ACE is in some danger of calcifying the already stubborn hierarchy in the subsidised arts and culture sector. Its new, three-layered cake approach to funding will serve to reinforce the position of the big boys, while the less established (or those less reliant on ACE) are likely to be part of the churn within the portfolio.

What is the NPO Quality Metrics score supposed to look like, if everything goes as well as it can? Will ACE exclusively fund the highest-rated arts organisations according to the Quality Metrics approach? If not, then what are those lesser-performing ones doing in the portfolio? It’s not clear to me that the necessary deep thinking has considered the consequences of all this.

I’m still left with big, eternal questions: who is qualified to make judgements about quality? Who is legitimately mandated to make them (not the same thing)?

Do people have the ability to express such judgements (is there the vocabulary and the means to use it)? The fact that two critics can disagree serves to indicate that their critiques say more about them than the show they have both attended. In addition, the likelihood that their tastes will change over time is a reminder of how any measurement system designed to facilitate change and creativity is likely to stumble sooner or later.

It’s great to see more money for research, but will this say anything about the art or everything about our society?
