Reputation matters when sourcing from the crowd


I’ve long been a fan of the website Theatremonkey, which provides a comprehensive summary of seat quality – from legroom to the view of the stage – allowing theatregoers to decide whether a seat is worth its ticket price.

For a long time, this was pretty much the only place you could find such information online. Earlier this week, though, a second website, SeatPlan, launched with a similar goal. The two sites take slightly different editorial approaches, and setting them side by side helps to illustrate the differences, advantages and flaws of each – and leads to thoughts that reach far beyond the sites themselves.

The long-running Theatremonkey is a curated list: the site’s owner takes reports submitted by users and fashions them into brief summaries that often cover a whole segment of an auditorium in a single, pithy paragraph, such as this one for the Aldwych Theatre, currently playing host to Top Hat:

The side blocks of rows J to X feel like satellite colonies for some reason, and the odd viewing angle is mildly irritating. For most productions, monkey feeling is that the extreme ends (first and last 3 seats in row J back) are the third choice of ticket, as there are more central seats for the same price, and the overhang is most noticed here. Lighting may also be hanging in view from the circle above.

This is the old, ‘traditional’ form of what’s become known as crowdsourcing: a single editor (or team of editors) sifting through submissions to craft a narrative. The approach only works if readers trust the editors to be balanced and honest in their summations. Over the years, Theatremonkey has proved that such trust is well placed.

The newcomer site, SeatPlan, promotes itself as a “TripAdvisor for theatre seats”, although I find the comparison is probably closer to SeatGuru than to the main TripAdvisor site. SeatPlan’s approach cuts out that intermediate layer of editing: visitors to the site get the raw data, or electronically aggregated combinations of it. The reliance on an intermediary to curate the information is gone – you can see all the submitted reports and make up your own mind. There is no editor to trust, or distrust; you become your own editor.

Of course, that means you then become responsible for casting aside false or misleading information yourself. By virtue of seeing so much incoming data, a central moderator can quickly identify the seating equivalent of “spam” – say, a mischievous marketer or misguided “fan” posting exaggeratedly optimistic opinions of seats in order to make a theatre and its show seem better, and more saleable, than they really are. If a crowdsourced data collector steps away from that moderation role, it becomes incumbent on each site visitor to work out which reports are fair and accurate and which are not – and it’s hard for individual users to gain the depth of knowledge needed to do that as consistently and reliably as a centralised system can.

On the other hand, a fully crowdsourced data collection and distribution site can, in theory, handle far more information than the relatively expensive, labour-intensive “traditional” method. It’s early days for SeatPlan, and I hope that it manages to avoid the sort of pitfalls that can limit its true potential. If nothing else, having two separate sources of seat information makes it easier for us theatregoers to make our own decisions about the relative value of tickets.

Of course, SeatPlan will only work if it gives its users enough data to sift through. At the moment, checking a few theatres I hope to visit in the next few months, I found it hard to find any seat information for the areas where I was hoping to sit; a number of queries about particular rows produced pages with no information at all.


I have to say – and this is just my personal opinion – that the current site design puts me off using SeatPlan. Every new site has to start somewhere, of course, but the design places an (unintended) emphasis on how little information the site currently holds. It’s a design I can see working well once SeatPlan has been populated with reviews from every West End auditorium. But a dataset that is small but growing makes different user-experience demands from a fully comprehensive one, and imposing a design intended for the latter on the former is frustrating, to say the least. And if prospective users are frustrated by the site, their incentive to submit seat reviews of their own is diminished – and with it, the site’s ability to reach critical mass.

When dealing with large numbers of contributions from a wide range of contributors, the question of who you can trust also comes into play. Any crowdsourced website has to grapple with the problem of user reputation – that’s why sites from Amazon to iTunes ask users to rate the quality of reviews. This is a crude way of trying to separate reliable reviewers from unreliable ones – and not, I think, a terribly effective one. If you don’t know the reviewer, does it really help knowing that 83% of a whole load of other people you don’t know liked their review? Did they like it because it confirmed their own views, or because it challenged them? You have no way of knowing.

In real life, we extend varying degrees of trust to different people in all sorts of ways. Take theatre criticism as a practical example: you may trust a few people who sit well outside your social circle – professional theatre critics, say – to influence your views. But you may also have family and very good friends who, while less experienced and knowledgeable, you know you can trust. The influence a person’s opinion has on us varies, then, not only with their wider reputation but also with our own opinion of them and their proximity to us.

These are the sort of value judgements we make every day, often subconsciously, to determine who to listen to and what weight to give each incoming opinion. Online, opinions can come so thick and fast that we don’t have time to process them in the same way. Algorithms can help – looking at somebody’s social media connections can give clues, for example. If I follow someone on Twitter and we regularly converse there, their opinion is likely to influence me more than that of someone I have no social connection with, or someone I follow but never RT or reply to. Even then, should the fact that I regularly chat with someone about comic books mean I trust their opinion on theatre? Maybe, maybe not. Or, back in the realm of theatre seats, should I trust views on legroom from a friend who is 6'5" more, or less, than from one who is 5'2"?
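To make the idea concrete, here’s a minimal sketch in Python of how a site might weight seat opinions by each reviewer’s relationship to the reader. Everything in it – the proximity categories, the numeric weights, the half-strength discount for acquaintances I only know in another context – is invented purely for illustration; as far as I know, neither SeatPlan nor any other site computes trust this way.

from dataclasses import dataclass

# Hypothetical proximity weights -- invented for illustration only.
PROXIMITY_WEIGHT = {
    "mutual_follow": 3.0,   # we follow each other and converse regularly
    "one_way_follow": 1.5,  # I follow them but we never interact
    "stranger": 1.0,        # no social connection at all
}

@dataclass
class SeatReview:
    reviewer: str
    proximity: str  # one of the PROXIMITY_WEIGHT keys
    on_topic: bool  # do I know this person for theatre opinions?
    rating: float   # seat rating, 1 to 5

def weighted_seat_score(reviews: list[SeatReview]) -> float:
    """Average the ratings, weighting each by the reviewer's social
    proximity and discounting people I only know in another context."""
    total = weight_sum = 0.0
    for review in reviews:
        weight = PROXIMITY_WEIGHT.get(review.proximity, 1.0)
        if not review.on_topic:
            weight *= 0.5  # the comic-book friend's theatre view counts for less
        total += weight * review.rating
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

reviews = [
    SeatReview("professional_critic", "stranger", True, 4.0),
    SeatReview("comic_book_friend", "mutual_follow", False, 2.0),
    SeatReview("theatregoing_friend", "mutual_follow", True, 5.0),
]
print(f"Weighted seat score: {weighted_seat_score(reviews):.2f}")  # 4.00

Even this toy version shows where the real difficulty lies: the weights are themselves value judgements, and no single set of numbers will suit every reader – which is, in essence, the reputation problem.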

No crowdsourced web application has yet cracked the difficult issue of reputation management. And whoever does will probably make enough money to be able to afford any seat in the house – even those ridiculous premium ones.
