Monday Morning Moan – if you’re transcribing the real world, you Otter not be using AI that distorts reality


If you think technology is going to solve your problems, then you don’t understand technology – and you don’t understand your problems. Those words were spoken by an unnamed cryptologist to American artist and musician Laurie Anderson, who shared them in a 2022 60 Minutes interview with journalist Anderson Cooper. While those words, wise in the extreme, could apply to any user of technology this century, they urgently need to be taken to heart by vendors too.

Let me explain. In this new AI age, Artificial Intelligence is often force-fed to us, usually in the form of ChatGPT or similar Large Language Model-based chatbots or generative systems. Even if we don’t want it, it is there – lurking in apparently innocuous tools, muscling into our careers and daily workflows, sometimes unannounced.

In some cases, that process is (thankfully) overt – so much so that its stupidity is almost endearing. Take Microsoft’s dumb exhortation to write or rewrite every LinkedIn post with AI, despite that platform being designed for human-to-human networking – including among professional writers. We now feel treated with contempt every time we log on to say something flawed or insightful.

But in other cases, AI is being force-fed to us covertly, invisibly, and without forethought, transparency, or disclosure as vendors pursue the technology with an evangelical zeal that feels tactical and profit-led, rather than ethical and strategic. The message that shouts from such platforms is that nobody has considered the consequences of these actions or engaged in thought experiments and scenario planning.

In other words, nobody at these vendors’ strategy meetings has ever said, ‘No’ or ‘What if?’ And that terrifies me.

The result? A world of completely unnecessary problems that users are left to disassemble and manage for themselves. Disruption, for sure; ‘moving fast and breaking things’, certainly. But unwise.

Dumb future

Take Otter.AI. Like many journalists and analysts, I have long relied on this once-simple, once-focused tool to do something useful: transcribe, in real time, one-to-one conversations, face-to-face interviews, and even entire conferences to an acceptable degree of accuracy (which I would estimate at 85%, depending on the speaker’s accent and audibility).

Otter’s occasional transcription errors have never been a problem, because the original audio is saved alongside the transcript, so the correct wording can always be checked. Simple.

All this data is saved in a cloud archive called My Conversations, which – over time – becomes a searchable library of my own interviews, plus a record of the countless events I have attended around the world, including content that is sometimes privileged, sensitive, and (occasionally) off the record in terms of its reporting.

As a labor-saving app, Otter was thus a rare example of a tool that actually fulfilled the promise of Industry 4.0 technologies – it removed the drudgery from a task; it saved me hours of tedious work transcribing meetings myself, thus freeing me up to focus on being a professional interviewer (at the time of the conversation), and, afterwards, a professional analyst and author; it allowed me to be more present in my own career.

But thanks to AI, it fulfils that role no longer. In my personal experience, at least, Otter has become dangerous and untrustworthy, just by adding AI.

The first sign of this was Otter becoming plagued with AI ‘featuritis’ this year, which suggests new priorities at the vendor. (Some call this process ‘enshittification’.)

Indeed, to my eyes, the platform now seems to have an entirely new purpose from the one I originally subscribed to: it now seems to be about pushing users towards having the world explained by a chatbot rather than by the evidence of our own eyes, ears, expertise, and interviews (stored in Otter).

Troublingly, it appears to assume that I want to share every recording and transcript with chatbots like ChatGPT, and with other tools such as Slack. That assumption alone is worrying, as it suggests the vendor believes the default status of my data is ‘open’.

On that point, Otter Chat – a new feature, currently in Beta – offers a measure of reassurance. It says: 

Conversation transcripts and chat history are only passed to external providers temporarily when you ask Otter Chat a question, but this data is not stored by the third parties. So, while Otter leverages external AI services for Otter Chat, your conversation data remains private and is not used for training external AI models like ChatGPT.

Even so, Otter no longer seems to be about transcribing real-world conversations for this paying subscriber; it is now about urging me to paste my data, sourced by me in real time, into its AI, so it can explain my own career to me – even if I have been speaking to the world’s leading experts face-to-face.

Training

Indeed, the extent to which Otter has become infected with AI – I will come back to that description later – is worrying. Certainly, it assumes I intend to share my conversations directly with others online.

If I copy a paragraph of text from the transcript of my own interview, Otter now assumes I want to paste it into ChatGPT in the cloud rather than into Word on my laptop. Indeed, its new AI Chat function (it is unclear whether that is separate from Otter Chat) encourages me to do exactly that, so that the AI’s supposed wisdom can be brought to bear on my private conversations, or on interviews in which privileged data has been shared.

But even that is not what this article is about: it merely serves as the troubling, if predictable, context. After all, I can simply ignore the AI Chat function – and do. And I can disregard Otter’s exhortations to paste my private conversations into its AI, or into ChatGPT, even though I feel jostled, barged, and hemmed in by that intrusive, patronising assumption. (The space in which I operate as an independent, free-thinking human is getting smaller by the day.)

No, this week I noticed something truly alarming, which shocked me to my core. And this revelation came from a different section on the Otter platform: the page labelled ‘Summary’ – which is Otter’s supposed precis of each recorded conversation.

Or at least, I thought it was.

So, what was so shocking that it has undermined my faith in an entire platform? It was the words “1.2 million”, describing the number of developers located in a particular country; they appeared in one Otter meeting summary. (The context was me attending a conference on open-source software and recording the presentations and interviews into Otter, for my own analysis.)

Why was that innocuous figure so shocking? It was because it appeared absolutely nowhere in either the transcript or the audio recording of the conversation as it happened in the real world. That statistic was entirely new data, flown in from an undisclosed, unverified external source, yet credited to a human speaker. It was presented as part of the ‘summary’ of a real-world conversation, even though those words were never uttered by the speaker, certainly not while I was recording.

Where did that statistic come from? It was a very specific and, from my subsequent research, apparently accurate figure, so this was no hallucination. Neither was it a transcription error: the speaker did not say “two million” or “1.2 billion” and was then misquoted; she did not give a figure at all. Clearly, this was additional data from an unknown source.

But what source? Was that source trustworthy? Was it 2024 data, or from an earlier year? And what was the survey base? For a journalist, these are bread-and-butter questions, but in the AI world they are apparently irrelevant. We are simply asked to take what an AI says at face value – partly, one suspects, because the source training data may sometimes have been scraped illegally.

Indeed, the whole ‘summary’ – Otter’s word, not mine – of that meeting read like something written by ChatGPT from whatever external sources were the closest match to the speaker’s presentation. It did not appear to be a summary at all, in fact. So, what was it? An AI confection, perhaps, masquerading as a record of the real world?

The inescapable conclusion is outrageous and unacceptable: Otter’s AI has begun to insert itself into real conversations – to amend them, edit them, re-interpret them, and to become an uninvited mediator between users and the real world. It has begun editorializing human conversations.

It has, quite literally, put words into a human being’s mouth, and claimed them as a summary of that person’s real-world utterances.

Why?

Whether the ‘1.2 million’ figure in Otter’s summary was accurate or not is irrelevant: the important point is that it was never uttered by the speaker. So, why did it appear in a ‘summary’ of something she said?

Why are users not informed that the Summary page is nothing of the sort, but rather a chatbot’s version of what should or could have been said, perhaps – and, indeed, has been said elsewhere online?

For a journalist, this lack of transparency, and the active intrusion by an AI into an online record of a real conversation, is a nightmare scenario. Reality is being subverted without the user’s knowledge – and I only discovered this by accident, by checking my transcript and the audio and finding no record of the statistic mentioned in the ‘summary’.
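
(Incidentally, for readers who want to run the same check on their own archives, below is a minimal sketch in Python of the cross-check I did by hand: extract the numeric claims from an AI-generated ‘summary’ and flag any that never appear in the transcript. The file names and the crude number-matching are illustrative assumptions on my part, not Otter features.)

```python
import re

# Minimal sketch of the manual cross-check described above: flag numeric
# claims in an AI-generated "summary" that never appear in the transcript.
# The file names and simple string matching are illustrative assumptions,
# not part of Otter or any other product.

NUMBER = re.compile(r"\d[\d,.]*\s*(?:million|billion|thousand|percent|%)?", re.IGNORECASE)

def numeric_claims(text: str) -> set[str]:
    """Return a set of normalised numeric phrases (e.g. '1.2 million', '85%')."""
    return {m.group(0).strip(" .,").lower() for m in NUMBER.finditer(text)}

with open("summary.txt", encoding="utf-8") as f:
    summary = f.read()
with open("transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

transcript_numbers = numeric_claims(transcript)

for claim in sorted(numeric_claims(summary)):
    if claim not in transcript_numbers:
        # Anything printed here was 'flown in' from outside the conversation.
        print(f"Not found in transcript: {claim}")
```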

In theory, this creates the possibility that false or untrue statements could be credited, in good faith, to individuals who have spoken either publicly, or privately to a journalist. And that fact alone now makes the entire platform unusable as a record of real-world conversations.

Put simply, I can no longer trust Otter to record basic facts, and then present them to me. Instead, I am getting an AI-powered confection that claims to be a precis of a conversation I have taken part in myself. A truly extraordinary and shocking situation – one that is so stupid it beggars belief.

Also, the new and overwhelming presence of AI on Otter’s platform suggests – to me personally – that I can no longer trust Otter with my data’s privacy and security, or that of my interviewees. And that is despite its reassurances that my data is only shared temporarily with an external AI provider, which I take to be OpenAI. 

Plus, I simply don’t want to use gen AI in my work; I don’t need it. Yet I am being force-fed it, covertly. I deeply resent that. It does not enhance my workflow; instead, it is now actively undermining it by making my record of the real world unreliable.

AI has thus infected my work – and reality itself – in the vendor’s mistaken belief that this helps me.

Otter’s response

For the record, I have never used the Summary page in my own writing, for the simple reason that summarizing and explaining a discussion is my job as an intelligent human. After all, I asked the questions. Only I know why I asked them, and what the context was, and what I intend to do with the answers.

What’s more, Otter’s Summary page has always missed both the point and the subtext of a conversation. It doesn’t see the real message, and it always lacks human nuance and subtlety. Indeed, it invariably reads more like a press release than a record of two people talking.

So, what does Otter have to say for itself? To its credit, the company reached out to me on X, the platform formerly known as Twitter. Delivered in a private message, its statement in full reads:

Hi Chris – we saw your post [my tweet on X about this] and wanted to reach out. Otter’s AI Summary feature is designed to automatically generate a summary based on your specific meeting or conversation transcript.

That being said, AI may occasionally make mistakes. For this reason, we will update this feature to include a disclaimer similar to what we have with Otter AI Chat and what other AI tools have.

We are also considering a feedback system for AI Summary similar to AI Chat. Our product team would be interested in learning more about your specific issue – would you be willing to connect with them?

(I certainly would.)

The response is interesting, as including completely new (if apparently accurate) data from an unknown, unverified source is clearly not a “mistake” – a mistake would be misreporting data mentioned in a conversation, rather than flying it in from elsewhere. (The AI must have pulled it from an online source or from its training data.)

Meanwhile, talk of a disclaimer suggests (to my mind) that Otter now knows it has a problem – potentially a legal one? Also, the page it refers to is not called ‘AI Summary’ on the Otter account page at the time of writing, but simply ‘Summary’. Otter’s private message to me, therefore, is distorting reality once again. It asks me to ignore the evidence of my own eyes.

And as that Summary is the first thing I see when I open one of ‘My Conversations’, rather than the transcript, these issues need urgent redress. Otter clearly wants me to read what its AI thinks of my interview first. The interview is of secondary importance to the vendor, but it is the only thing that is important to me. And it is the only important element to any professional user.


I replied to Otter with a screengrab from my account and asked them to explain where the additional data had come from, what its source was, and what it was doing in a summary of a conversation in which no such data was mentioned. 

I also asked about its use of ChatGPT and whether my conversation data was being used to train that system – bearing in mind my interviews might be private or privileged.

At the time of writing, I have heard nothing further from them. It is Labor Day weekend. Should the company reply in greater depth, I will – of course – publish its response in full in a follow-up.

Do better

Until then, vendors please do better.

This is a truly awful use of AI, because it is driven by the arrogant, belittling assumption that I need AI to do my job for me, including explaining basic facts. My job is to speak to human beings and say what I think in response. I have zero interest in reading an AI’s perspective, so get it out of my way.

Vendors – employ critical thinkers before you roll out new functions. And consider the possibility that your new ‘solutions’ may be creating a world of unnecessary problems, making simple tools unusable, and – inadvertently – treating customers as both idiots and lab rats.

Consider professional integrity, transparency, privacy, accountability, and auditability, especially as they apply to your professional users. Don’t assume their data is yours to play with.

Unless you rethink this unseemly rush towards ‘AI with everything’ and consider the professional implications of it, you are not enhancing your cloud platforms, you are merely infecting them with dodgy code in service of a desperate, paper-thin business model.

You are subverting the very nature of recorded facts. And that cannot stand.

Please reverse course and think again. There is nothing wrong with making simple tools that just work. There is no shame in it – users welcome it. And at the very least, let people opt out of gen AI entirely. Bin the pop-up windows about ChatGPT.

Force-feeding users at every turn is never a good strategy, because we are not babies. So, stop treating us as if we are.
