by Eva Duffy

Last week, I was preparing to lead a session of our work book club. We’d been reading Milkman by Anna Burns, a novel that’s famously challenging, darkly hilarious (in my opinion) and narrated by someone whose grip on memory, chronology and conversational accuracy is somewhat questionable.
I had a clear sense of what I wanted to explore with the group: the narrative digressions; the stream of consciousness and the protagonist’s interiority; the themes of surveillance and silence; and that dark humour.
So, I turned to ChatGPT.
I didn’t fire off “give me ten book club questions about Milkman.” I explained what I found interesting about the narrative devices, structure and tone; why I loved the narrator from the opening sentence onwards; and what I hoped the book club discussion might uncover.
Two hours and an immensely enjoyable conversation later, I had a clearer focus and a great set of questions. And it differed from my usual way of using ChatGPT, where I expect a bit of back and forth - “no, ChatGPT, that’s not quite what I meant” - because I’d taken the time to explain the context and my vision for the discussion before I asked for any outputs or advice.
Since then, I’ve been reflecting on how often the problem with communication - at work, especially - is that we assume we’ve said enough: that colleagues will just know what we mean and that our intention is obvious.
In established comms teams, you build up a kind of shorthand. You know the tone, the expectations, the usual angles. If someone asks for a press release, you don’t need reminding about the inverted pyramid model. But that shared instinct doesn’t always work.
Only last week, I asked someone in my team for 200 words for our e-newsletter on a lovely community story. He came back with something thoughtful and well crafted, with a completely different angle to the one I’d had in mind. Both versions were valid. I’d just committed the unforgivable newsroom error of assuming.
ChatGPT doesn’t let you get away with that. It doesn’t nod along or smile politely. It reflects exactly what you’ve told it and does exactly what you’ve asked. It’s made me think more carefully about how I brief people, how I share ideas and how I give feedback. I’d even go so far as to say it’s made me a bit more deliberate.
And yes, it helped me write better book club questions. But more than that, it reminded me how often we skip the bit that matters most - the context, the intent, the bit where we explain what we’re trying to do.
Which brings me back to our Milkman narrator. She’s an unreliable one, we’re subtly led to believe. But maybe that’s not quite the right diagnosis. Maybe it’s less about unreliability and more about dissonance between how she perceives her story and how the reader receives it. What she leaves out, skips over or can’t bring herself to say - because not saying and not naming is a key theme in the book - creates a space we must fill for ourselves. The gap between her world and ours is where all the tension lives.
And that’s not a million miles away from the gaps we come across daily in our working lives.
The more I work with AI, the more I notice it. Maybe one useful thing about these tools - if we’re willing to use them well - is that they quietly remind us of what’s missing. They nudge us to identify our intent, clarify the ask and explain ourselves better.
Not because the tech needs it.
Because people do.
Eva Duffy is Head of Communications at the Royal Free Charity