Want to chat? What bots can teach think tanks about connecting with the public

6 May 2020
SERIES: OTT Annual Review 2019: think tanks and technology

[This article was originally published in the OTT Annual Review 2019-2020: think tanks and technology in March 2020.]

Hey there [your name]. How are you doing?

Would you like to read this piece on bots and think tanks, or would you rather look at this incredible video of a train passing through a market in Bangkok?

Articles that ‘chat’ like this one are not the norm. But they do already exist, built with simple coding. What remains to be seen is whether our social skills have evolved at the same speed. Are we ready to present our ideas in a conversational manner?

Conversation is one of the most powerful human experiences. A great conversation can change the course of a project, a relationship, or even a life. That is why recreating the experience when we talk to our think tank audiences is so promising.

You might have heard that algorithms and bots learn to chat by looking at human conversations. And that’s correct. However, I’d like to turn that idea around and ask: what can we learn from bots about human conversation?

Ambiguity, suspense and reassurance

Last year I met Emily Withrow, who was Director of Quartz Bot Studio at the time. She told me that while creating conversational bots to bring journalists and readers together, her team discovered what they called ambiguous emojis.

When bots don’t know how to respond to a reader’s question (sound familiar?), ambiguous emojis come to the rescue.

For example, if a user says to the bot: ‘You seem quite silly’, it can reply with the ‘doing nails’ emoji.

It’s a way of saying, ‘I don’t care if you bully me, it won’t affect me’.

Now if another user says: ‘Bot, you are a genius’, it can reply with the same emoji.

But in this case, it will look like, ‘Yes, I’m super, and I know it!’

Ambiguity is a human tool. A fantastic narrative mechanism. Pure suspense. Sometimes we don’t quite understand what someone is trying to say to us, and that keeps us wondering, it may even keep us awake at night.
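In code, this ambiguous-emoji fallback can be as simple as a default reply when no scripted answer matches. Here is a minimal sketch; the function name and the dictionary lookup are illustrative, not Quartz’s actual implementation:

```python
NAIL_POLISH = "\U0001F485"  # the 'doing nails' emoji

def reply(user_message, scripted_answers):
    """Answer from the script when we can; otherwise fall back to an
    ambiguous emoji and let the reader fill in the meaning."""
    return scripted_answers.get(user_message.lower().strip(), NAIL_POLISH)
```

Whether the emoji reads as ‘your bullying doesn’t touch me’ or ‘yes, I know I’m super’ depends entirely on what the user just typed, not on anything the bot computed.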

Suspense is a central part of conversation on chat apps. The famous three dots … waiting … we know a message is in the making. When we build bots for think tanks, we always add those three dots before sending the bot’s responses – even though computers can reply immediately. The three dots create the illusion of a conversation with a human being.

The three dots are also a reassurance that some entity – natural or artificial – is really on the other end of the line. The equivalent of looking at the other person over the table in a meeting or asking ‘are you there?’ over the phone when we hear that suspicious silence.
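The pause behind those three dots amounts to a deliberate delay scaled to the length of the outgoing message. A sketch of the idea, with illustrative function names and timings rather than any particular bot framework’s API:

```python
import time

def typing_delay(message, words_per_second=3.5, max_delay=4.0):
    """Return a human-like pause (in seconds) before sending a reply.

    The delay grows with message length, as a human typist's would,
    capped so long answers don't keep the reader waiting forever.
    """
    return min(len(message.split()) / words_per_second, max_delay)

def send_with_typing(message, show_dots=print, deliver=print):
    show_dots("...")                  # the three dots: a message is in the making
    time.sleep(typing_delay(message)) # the bot could answer instantly, but waits
    deliver(message)
```

The computer could reply immediately; the whole point of the delay is the illusion of a human on the other end.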

Human-to-human communication also badly needs those reassurances – the confirmation that we are being heard. When Google presented a prototype of its virtual assistant, it showed a conversation between the bot and the receptionist at a beauty salon. The moment the audience celebrated most was not when the bot found the best slot in the agenda for a haircut appointment, or when it understood the subtle difference between a pedicure and a nail repair. It was when the bot said ‘mmm’ to show the receptionist that it was listening to her.

How many think tanks know how to say mmm to their audience to let them know they are listening?

Using bots to talk to think tank audiences

At Sociopublico, a communications studio for complex ideas in Argentina, we have been testing bots as a tool to reach the public. We are looking for new ways to say mmm to people, in the hope that it can help them stay connected with our messages longer, at a time of attention scarcity.

We have built three bots: one to ‘test yourself as the Economy Minister of Argentina’ (only in Spanish; English speakers might use the restriction as an excuse to quickly run away); another with Google to help users spot misinformation (English version coming soon); and a third with Cippec, PwC and Brookings on the future of politics, which guides users exploring what politics will look like in 2050.

What have we learned about humans by building these bots?

1. We can chat for a longer time

Audience analytics tell us that people spent three to five minutes talking to the bots, something very difficult to achieve with plain text or video appearing in our audience’s social media feeds.

When we are chatting, we tend to stay. Conversations (with bots and humans alike) keep us in the moment and encourage us to keep participating.

2. We want to go deeper

In beta testing, users asked us for more complexity and detail in the information the bot provided during the conversation, or when offering its final findings. It was like heaven for us knowledge communicators.

We tried really hard to keep the experiences short and simple, even when working with complex content. And here were users demanding more detailed and sophisticated messages.

That seems like another advantage to conversation – once people have allocated time to the experience, they find space to dig deeper.

In the project on the future of politics, that feedback led us to build an extra product: a scrollytelling piece explaining the fundamentals of the paper behind the bot, linked to the different results the bot offers.

3. We like winks of complicity

Conversations allow us to build complicity with the audience by using a traditional communications tool: good copy.

For example, our bot on the future of politics asks users where they are from. If you say Rio de Janeiro, it will reply:

Rio de Janeiro? Lovely in the summer.

But if you say London, it will reply in exactly the same way: London? Lovely in the summer.

We expect users to notice that, no matter their reply, the bot says the same thing, and to smile a little at our subtle mocking of the inflexibility of bots.

The bot on the Argentine economy lets you select an avatar. If you choose ‘gatite’, the whole conversation will use gender-neutral language, without announcing it.

These are simple and superficial winks, but they can help connect with the intelligence and the sensibility of others.
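Under the hood, a wink like the ‘gatite’ one can be a silent switch between two sets of copy keyed to the avatar choice. A minimal sketch, where the dictionary layout, the function name and the sample strings are illustrative (the ‘gatite’ trigger is from the bot described above):

```python
# Two parallel copy sets: standard Spanish and gender-neutral Spanish.
# The sample strings are placeholders, not the bot's real script.
COPY = {
    "default": {"welcome": "Bienvenido, ministro"},
    "neutral": {"welcome": "Bienvenide, ministre"},
}

def pick_copy(avatar):
    # Choosing the 'gatite' avatar flips the whole conversation to the
    # gender-neutral copy set, without ever announcing the switch.
    return COPY["neutral" if avatar == "gatite" else "default"]
```

The user never sees a setting or a toggle; the choice of avatar quietly selects which script the rest of the conversation draws from.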

Before leaving the (chat) room

Bots seem to be helping us share more time, more detailed content and more complicity with our audiences. These are features we expected from a good conversation, but bots have allowed us to corroborate them, measure their effect and use them beyond one-on-one conversation.

In the meantime, a final tip learned from all this: the next time someone asks you for something difficult, you can just reply with the ‘doing nails’ emoji.