This post is a lament. I thought we were done with the 2023-2024 bad habit of every person attending an online meeting with their AI notetaking assistant in tow, but here we are, November 2025, and I have just attended not one but three meetings which gradually filled up with Otters, Fireflies, and other assorted disembodied stenographers.
If you have any sort of online life, you’ve seen them. Maybe you use them yourself. I certainly use the built-in Zoom AI Companion for automated summaries and actions, and I use Otter.AI as part of transcribing my own blog posts.
But there has to be a consensus on what “appropriate use” of these things looks like: we can’t all shamble into online meetings expecting that every attendee has given their consent for every word they say (and in some cases, screenshots of slides, faces, and other visuals) to be passed along to god-knows-what third party application.

Otters and Fireflies and Companions, oh my!
These assistants come in many flavours. Most of them, like the popular Otter and Fireflies applications, use services like OpenAI’s Whisper speech-to-text model to transcribe audio. They also frequently use GPT to create automatic summaries, lists of actions, speaker digests, and Q&A features.
They can be incredibly useful. My handwritten meeting notes can certainly be complemented by a concise dot-point summary, allowing me to catch up on places where I may have missed something important. The problem is not the applications per se: it’s the lack of consent over their use.
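For the curious, the pipeline these tools implement is simple in outline: transcribe the audio, then feed the transcript to a language model with a summarisation prompt. Here’s a minimal sketch of that flow; the transcription step is stubbed with dummy speaker segments, and the speaker names and meeting content are invented for illustration (a real notetaker would call out to a speech-to-text service like Whisper and then an LLM at the marked points):

```python
# Sketch of the two-stage notetaker pipeline: speech-to-text, then an LLM
# summary. The transcription step is a stub with made-up segments; a real
# tool would call a speech-to-text service (e.g. Whisper) here, then send
# the prompt below to a chat model.

def transcribe(audio_path: str) -> list[tuple[str, str]]:
    # Stub: a real implementation would upload the audio and receive
    # speaker-labelled segments back from the transcription service.
    return [("Alice", "Let's ship on Friday."),
            ("Bob", "I'll draft the release notes.")]

def build_summary_prompt(segments: list[tuple[str, str]]) -> str:
    # Flatten the labelled segments into the prompt an LLM would summarise.
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in segments)
    return ("Summarise this meeting as dot points, then list action items:\n"
            + transcript)

prompt = build_summary_prompt(transcribe("meeting.mp3"))
print(prompt.splitlines()[1])  # → Alice: Let's ship on Friday.
```

Note that in this design the entire transcript, including everything every attendee said, leaves the meeting and travels to a third-party API, which is exactly the consent problem at issue.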
Most of these applications offer ways to automatically join calls that are on the user’s calendar. For the love of all that is sacred (which isn’t much, in a Teams call), turn that feature off.
Lately, I’ve seen an even worse transgression. Some companies – I won’t name them here unless they keep doing it to me – have added me to “professional learning webinars” without my consent. That’s already dodgy, but it’s compounded when, even after unsubscribing, I receive a hundred follow-up emails from the Fireflies assistants of scores of people who also, presumably, never actually joined the session.
My inbox floods with spammy AI-generated summaries of a meeting I never intended to attend, and I get the added bonus of briefly panicking that I’ve missed an important event that should have been on my calendar.
Is it legal?
Look… probably not. So, stop it.
Here’s an unambiguous statement from the Office of the Australian Information Commissioner:
“The Privacy Act applies to all uses of AI involving personal information.”
OAIC: Guidance on privacy and the use of commercially available AI products
So, if your Otter joins a Zoom call and I’m in the audience, the host has only seconds before you could reasonably be accused of breaching a number of important aspects of the Privacy Act. For example, as well as capturing the audio transcript, Otter can take screenshots of the active call, including whatever is being screen-shared. What if the screen-share contains an image of my face, or my name, or my email address?
Well…
If personal information is being input into an AI system, APP 6 requires entities to only use or disclose the information for the primary purpose for which it was collected, unless they have consent or can establish the secondary use would be reasonably expected by the individual, and is related (or directly related, for sensitive information) to the primary purpose. A secondary use may be within an individual’s reasonable expectations if it was expressly outlined in a notice at the time of collection and in your business’s privacy policy.
OAIC: Guidance on privacy and the use of commercially available AI products
The transfer of my name, email address, an image of my face, or potentially the content of the transcript of what I’ve said during the meeting could reasonably be considered personal information “input into an AI system”. As I mentioned, for most of these applications the data is sent to a third party for processing, and typically this is via OpenAI’s API.
With regards to consent, there is also some variance state-to-state in Australia. The clearest articulation I’ve found of these differences comes from the website of McInnes Wilson Lawyers, and I’ll just quote it verbatim here:
The laws regarding the recording of conversations differ from state to state. Queensland is governed by The Invasion of Privacy Act 1971 (Qld) (IPA) in relation to the laws regarding the recording of conversations in Queensland.
Contrary to most other Australian Jurisdictions, the IPA states that it is legal for any participant in a private conversation, whether it be in person, via telephone or other electronic communication (including video conferencing such as Zoom, WhatsApp or FaceTime), to record a conversation in Queensland without any knowledge or consent of the other parties to the conversation. It is illegal, however, to publish or distribute a recorded conversation without the consent of the other parties.
In Victoria and New South Wales, it is illegal to record conversations without the consent of the other person. Complexities can therefore arise when conversations are held with clients who are in different states.
At a federal level, recording conversations that occur over the telecommunications network is also illegal. Advisers need to keep this in mind depending on the platform on which client meetings occur.
via https://www.mcw.com.au/ai-notetaker-tools-time-saver-or-ticking-time-bomb/
Internationally it’s not always clear where the boundaries lie. In the US, consent laws for recording vary state by state, with many only requiring “one party consent”. This map comes from an AI assistant company, so take the intention behind it with a hefty pinch of salt…

AI Assistants IRL
These types of AI note-taking assistants aren’t limited to virtual calls. The most popular hardware device in Australia seems to be Plaud, a credit-card-sized device that essentially records audio and sends it to… somewhere (I spent a while on their website and couldn’t quite figure it out beyond the vague statement that “Plaud Intelligence is developed on leading AI models, including GPT-5, Claude Sonnet 4, and Gemini 2.5 Pro”). These devices are available in a range of shapes and sizes, including one which sticks to the back of your phone to record phone calls, and one which is about thumb-sized and designed to nestle inconspicuously on your lapel.
These products almost seem designed to evade attention, and honestly I find them a little creepy. If you’re recording your own thoughts, fine. I regularly record voice memos while I’m wandering about by myself. But if you’re hitching up a hidden recorder to your phone calls then you’re either some sort of special agent in a spy movie, or you’re up to something. Recall the advice above from the McInnes Wilson website which states that, in Australia, “at a federal level, recording conversations that occur over the telecommunications network is also illegal.”

Image via: https://au.plaud.ai/
One blog post on the Plaud website appears to give tacit advice that secretly recording conversations is OK as long as you’re doing it for the right reasons, whatever they are. Here’s a little snippet:
Many people wonder about this: Is it okay to secretly record a call with a voice recorder? What if you don’t tell the other person? Does that make it an illegal recording?
Based on the law in many places, as long as you’re a part of the conversation and you don’t have an illegal reason for recording it, you’re generally in the clear.
But the legality of your recording can depend on the specific situation.
For example, was it a private conversation? Was another person involved? Was it about highly personal information? Also, even if a recording is legal, it isn’t a guarantee that a court will accept it as evidence. A judge will still look at things like whether the recording is real, if the way you recorded it was fair, and if it’s truly relevant to the case.
To be safe, it’s always a good idea to get the other person’s consent before you start recording.
via the Plaud blog
So, “to be safe” you should get consent: not because of (il)legality or some general sense of moral awareness, but because you might get caught and a judge might decide you’ve been a bit naughty…
Finally, these physical AI products also seem a little unnecessary. I can achieve everything that one of these (expensive) devices can do with the AI Companion built into Zoom, or with my laptop microphone and a free ChatGPT account.
Just turn off the auto-join
I’m only asking for one thing, really: turn off the auto-join. If, for some reason, you choose to connect your Otter or your secret credit-card-sized recording device to your calendar, then at least uncheck the box that says “automatically join meetings”.
If you do want to use them to assist with minute-taking, get consent from all parties. Apply some common sense: if you’re in a meeting with a hundred people, then it’s impractical – or at least very time-consuming – to get consent from everyone in the call. If it’s one-to-one and by mutual agreement, it’s probably fine.
Webinar hosts, team leaders, and presenters of online professional learning sessions don’t need the burden of individually nixing three dozen animal-based AI assistants in every Zoom call.
A new course is launching next week on the Practical AI Strategies website. The Practical AI Process runs from foundation to advanced-level lessons, covering everything from basic GenAI applications to open source AI, AI agents, and multimodal GenAI.
Sign up to the mailing list for more information:
