How to Think About AI: A Guide for the Perplexed

 

Richard Susskind

 

£10.99, Oxford University Press

 

★★★★✩ 

You need to think about AI. Yes, you. And not just about what it can do for your document management processes or whether it will put your children out of a job.

We’ll come to the reason presently.

But who is Richard Susskind to suggest *how* to think about AI? Wasn’t he the chap who, 15 years ago, forecast *The End of Lawyers?* And why should we shell out for yet another Susskind volume – his 12th – when we already know what he’s going to say?

Both snipes are unfair: Susskind’s ‘the end’ prediction came with a large question mark. And, while regular attendees at legal tech events may spot a couple of familiar anecdotes, the author’s ruminations from four decades of thinking about AI take him to a conclusion they will not necessarily expect.


In what may be his best book yet, certainly as far as a general readership is concerned, Susskind identifies two types of thinkers about AI. These he calls process-thinkers and outcome-thinkers, each exemplified by a public intellectual. Whether you accept his analysis will largely depend on your opinions of Noam Chomsky and the late Henry Kissinger.

The process-thinker, says Susskind, is intrigued by the operational details of AI; the outcome-thinker is preoccupied with its overall impact. I’ll leave it to you to guess which category Chomsky and Kissinger each fall into.

Most Gazette readers, if they think at all about AI, are probably process-thinkers. Not because they are fascinated by computational architectures but because a bit of process knowledge – ‘generative AI is simply a glorified text-prediction engine’ – can serve as a crutch for their preferred logical end-point, what Susskind calls ‘not-us thinking’.

Process-thinking also creates risks, most topically that of regulating today’s technology rather than what may be coming around the corner. Here, a dose of outcome-thinking is needed – not in the sense of AI evangelism, but as a way of intelligently balancing the potential risks and benefits of what lies ahead.

That is how we should be thinking about AI. And the reason why? Because, by shying away from difficult questions, we are leaving them to be dealt with by technologists and process-thinkers. Are we really willing to bet humanity’s future on that basis?

 

Michael Cross is the Gazette’s news editor. He has reported on AI developments since the 1980s.