Ok, f*** it, let's talk about AI

Why uncertainty makes the discourse so terrible - and reading recs!

Andrew Griffin, in his (must-read-but-hard-to-link-to) IndyTech newsletter, referred to the UK’s recent “AI Summit” as “the usual combination of apocalypse and bromides”. I think this language aptly describes any discussion around AI at this point.

As the requisite cyberpunk guy, I took part in a panel about the impact of AI on publishing while at World Fantasy last week. I am pleased to say that the panel had overwhelmingly sensible panelists. That said, the conversation still sank, inexorably, into the expected mire that is the Discourse. We decided, in short, that it was very scary and Someone Should Do Something About It.

My frustration around panels like these - and, indeed, the Discourse as a whole - stems from the uncertainty. Not only are we unclear about what the future holds with regard to AI, but most of us are entirely powerless to do anything about it. As a result: apocalypse and bromides.

In the absence of any significant agency over our own greater destiny, much of the conversation around AI has gravitated towards what we can control: our own behaviour. For the panel at World Fantasy, the questions from the audience were about what is permitted for the individual author. Should I use an AI voice for my audiobook? Is it ok to get story prompts from ChatGPT? These are reasonable questions, and worth discussing, but they also highlight how little say we, as individuals, have over this seismic, systemic shift. We’re stuck arguing over the nuances of our own behaviour. We have vicious debates over how we should behave, and build moral molehills to die on, but ultimately our choices are limited to how we allow ourselves to interact with the tools that are being foisted upon us.

An unintentional, but excellent, example came up on the panel. One author introduced himself as ‘blah blah, or “b.b. blah”, so that I can be found on Google’. That is, thanks to a massive algorithm (hi, AI), the author wasn’t visible online. So the author adapted in the only way he could: he changed his name. Very clever. A bit grim. This, right here, feels like the future most of us will be facing: finding ways to tweak our behaviour or ourselves (even our own names), in order to survive in a system that has already begun its irreversible transformation.

So what do we do?

First, we must be kind to each other. We’re all in the same situation, trying to figure out what does and doesn’t work, while somehow keeping upright on a tilting, uneven, and maniacally-spinning playing field. We’re all making the rules up as we go along. Don’t be a dick.

Second, seek, where you can, the increased agency of collective action. Just sticking with publishing as a microcosm: 81% of authors aren’t full-time writers. Like many other creative trades, authors rely on additional sources of income from other fields. Those fields are also impacted by AI, perhaps far more significantly (at least in the short run) than publishing. Where being part of a collective is possible, it is well worth considering. Where it isn’t possible, think about starting one. I’m very glad to be part of a union. And good luck to you, and us, all.

We’re an adaptable species, but that doesn’t mean that the process of change is always a pleasant one. Be good to, and stand with, one another.


Instead of droning on about REAL AI, let’s have some reading recs for FUN AI. Here are five shorter works that have more to say about AI than a thousand-thousand LinkedIn posts (or high-powered ‘summits’):

“Runaround” by Isaac Asimov (1942): The first of Asimov’s stories with his famous ‘Laws of Robotics’. I’m going to sound like such a grump, but I think it is worth reading something like this purely to understand how this has all gone so very, very wrong. There’s a fundamental naiveté here - perhaps a charming one - that goodness can be hard-coded into technology, making adorbs robo-slaves that will do our bidding in cutesy quirky ways. It is useful to read because I think this bias (adorbs robo-slaves; easily puzzled out) still permeates our thinking. The reality is, of course, much messier and lacking a singular moral backbone.

“Desert Rain” by Pat Murphy and Mark Van Name (1991): This deserves a longer mention, as this story has been living rent-free in my head for over thirty years. It is about an artist who, reluctantly, beta-tests a new household AI (think Alexa, but more useful and less inclined to shill you Amazon Music Unlimited). Her relationship with the - extremely humanised - AI is first awkward, then comfortable, and finally, perhaps, too comfortable. The story directly considers our relationship with humanised technology, and explores the limits of - and meaning of - that humanisation. Do we need fellow humans not because humans are ‘better’, but because we’re ‘worse’ - unpredictable, unhelpful, and challenging?

“Catching Crumbs from the Table” by Ted Chiang (2000): On the adaptability of humans. Even when we (inevitably?!) become less than our creations, we’ll still find a role: one that challenges us intellectually and moves us forwards (albeit at a slower pace). Like many of Chiang’s stories, this is bittersweet. We do have a role in this new universe, but is it a meaningful one?

“AI and the Trolley Problem” by Pat Cadigan (2018): Not much to add, really. I’m not fundamentally against Asimovian AI-as-logic-puzzle thinking, and I think Cadigan has adapted and modernised that mode. But Cadigan’s a badass.

“Thoughts and Prayers” by Ken Liu (2019): An absolutely heart-breaking story about how a grieving mother uses AI to bring her murdered child to (virtual) life. But rather than assuaging her loss, or bringing about meaningful social change, she simply fuels another battle in the ‘culture war’. It is a story less about the ‘adaptability’ of humanity, and more about our capacity for pettiness. Group identity, or simple venality, will always find a way. There’s something even sadly reassuring about it: humans gonna human.

“Ghosts” by Vauhini Vara (2021): Like the story above, this is collected in The Big Book of Cyberpunk. It is, in fact, the closing story, and I have a lot to say about it in the book itself. “Ghosts” uses AI as a tool, as the author processes the loss of her sister by starting a story and letting a language model finish it. It works in increments, with the ‘ghost in the machine’ attempting to describe a life (and loss) one step at a time. It is less about the story than the process, with a human - palpably struggling with the ability to express the inexpressible - finding a way forward.

t56yyyyyyyyyyyyqwaiok23we ← contributed by my cat walking across the keyboard

The Big Book of Cyberpunk by Jared Shurin
