Algorithm and Blues

Image © Nick Gentry

On the eve of the election, two things I have read this week have combined in my head, and I have not been able to stop thinking about them. The first is the excellent comment that Dave Freer left on my post earlier this week. The second is this video by music critic Chris Weingarten. The subject of these two influences – or at least the tenuous connection I have built between them – is the conflict between the benefits of technology and the tyranny of numbers.

OK, so even to me that sounds a bit dramatic. But it is true. I’ve touched on this topic in a previous post, and I came to the conclusion that optimisation of artistic expression by algorithm may well be possible, and even useful, but it’s really bloody depressing. I still feel this way. I was at first sceptical of Dave’s explanation of how mathematical modelling of book acquisition could work, but he convinced me. Snip:

At the moment, you have your gut feel and the bookscan figures to decide what you buy. If you had better quality data (ie. laydown, returns, normal sales of that sub-genre and laydown within each geographic area … you could say which … would make your company more money, which had the lower risk, what was actually a reasonable ask for the books in question. It could also tell the retailer which were good bets for their area, and publisher where to push distribution. It doesn’t over-ride judgement, it just adds a tool which, when margins are thin, can make the world of difference.
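
To make that a little more concrete, here is a minimal sketch of the kind of calculation Dave is describing. To be clear, this is my illustration, not his: the comparable titles, the field names and the royalty rate are all invented, and a real tool would be built on genuine laydown, returns and sub-genre sales data rather than a toy average.

```python
# A rough sketch of the kind of acquisition tool Dave describes. Every name
# and number here is invented for illustration; a real model would be fitted
# to actual laydown, returns and sub-genre sales data, not an average of
# three made-up comparables.

from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class CompTitle:
    """Historical figures for a comparable title in the same sub-genre and region."""
    laydown: int   # copies initially shipped to retailers
    returns: int   # copies sent back unsold


def sell_through(comp: CompTitle) -> float:
    """Fraction of the laydown that actually sold (i.e. was not returned)."""
    if comp.laydown == 0:
        return 0.0
    return (comp.laydown - comp.returns) / comp.laydown


def assess(comps: list[CompTitle], planned_laydown: int, cover_price: float,
           royalty_rate: float = 0.10) -> dict:
    """Project sales, revenue and a crude risk figure for a prospective title,
    based on how comparable titles performed."""
    rates = [sell_through(c) for c in comps]
    expected_rate = mean(rates)
    risk = pstdev(rates)  # spread of comparable performance as a rough risk proxy
    expected_units = planned_laydown * expected_rate
    expected_revenue = expected_units * cover_price
    return {
        "expected_sell_through": round(expected_rate, 2),
        "expected_units": int(expected_units),
        "expected_revenue": round(expected_revenue, 2),
        "risk": round(risk, 2),
        # a 'reasonable ask': roughly the royalties the expected sales would earn
        "reasonable_advance": round(expected_revenue * royalty_rate, 2),
    }


if __name__ == "__main__":
    comps = [
        CompTitle(laydown=5000, returns=2000),
        CompTitle(laydown=8000, returns=4500),
        CompTitle(laydown=3000, returns=900),
    ]
    print(assess(comps, planned_laydown=6000, cover_price=24.99))
```

Even this toy version shows why the numbers are seductive: feed in a few comparable titles and out come an expected sell-through, a projected revenue, a risk figure and a ‘reasonable’ advance, each stated with a precision the underlying data may not deserve.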

I am forced to agree with Dave that if such a tool were available it would be of great use to publishers in deciding what to buy and, in a great many instances, which books would sell (if you still don’t believe it, I recommend reading the whole thing). Nonetheless, it fills me with despair. As Weingarten says in the video I linked to, most of us who got into the world of writing did so because we suck at maths. But it’s not just that. There’s a kind of ethical issue at stake here too. The availability of a tool like this would make publishers lazy. I once heard the use of test audiences for TV pilots and films described as being more about ass-covering than actually predicting the success or failure of a film. And I have to say the same thought occurs to me about the statistical modelling of book acquisition.

This is not to say the information wouldn’t be useful, but it would mean that when a book that tested well in the model bombed, publishers could throw their hands up in the air and say, “Well, it tested well.” It would be a tool that sales directors and corporate executives would use to dampen creativity in publishing. Presumably (though correct me if I’m wrong, Dave!) the sales of statistical outliers that don’t fit neatly into a pre-existing genre or sub-genre would not be easily predicted under this model. And there are a lot of books that don’t fit into genres. I’ve heard it said that when it comes to books there are almost as many genres as there are books. Does that mean publishers would just use their own judgement? Or would they be even less likely than they already are to take on books that aren’t safe bets?

Of course, Dave will probably tell us that this amazing statistical model would only be a tool. It wouldn’t ‘override judgement’, as he says in the quote above. But humans like to rely on machines and numbers – especially when it comes to difficult decisions. Sometimes that reliance comes at the cost of something difficult to quantify. And perhaps on this day, when the leaders of our country are trying to win an election based as much upon the statistically predicted thoughts of a few key voters in a few key marginal seats as upon any true leadership, beliefs, policies or moral character, I fear that ceding our decision-making to an algorithm has the potential to take away far more than it gives us. What do you think?