AI summaries are being treated like records. That’s a legal disaster waiting to happen.
I was watching the NBA Finals the other night. Between timeouts and replays, the ads were relentless. Every other one was pushing a new way to bet. Over. Under. Parlays. You could bet on who scores first, how many rebounds they get, and probably what color their shoes are. It got me thinking. We’re surrounded by bets. Some we make on purpose. Some we don’t even see. Right now, companies are making a quiet but very real bet. That AI-generated meeting summaries are accurate. That they can be shared, trusted, and even used as records. And that they won’t create any legal liability. That’s a terrible bet.
Take this example. An HR leader talks on Zoom about potential org changes tied to budget planning. Nothing confirmed. Just a conversation. The AI summary comes out like this:
“Leadership confirmed that a reorganization and role eliminations will move forward as part of budget planning.”
That summary gets posted to Slack. People panic. The story spreads faster than the truth can catch up. Legal has to contain it. External comms get involved. Employees start calling lawyers. It costs a million just to settle everything down.
Here’s another one. A product manager tells a client, “We’re aiming for August, but need internal alignment.” The AI hears it differently. The summary goes out saying:
“Launch is confirmed for August.”
Sales forwards it. Contracts get signed. August comes and goes. Nothing ships. Refunds, lost trust, and legal headaches follow. That one’s a ten-million-dollar error, all because the bot wrote a sentence that no one actually said.
And then there’s the big one. A quarterly board meeting. Someone raises a concern about margins if the economic outlook gets worse. It’s tentative, buried in a broader discussion. But the AI tries to be helpful and lands on:
“The company expects margins to decline in the coming quarters.”
That line ends up in a board deck. The deck gets shared. It leaks. Journalists catch it. Investors react. The stock takes a hit. Then the class action lawyers show up, asking why the company issued internal guidance it didn’t disclose to the public. Now the SEC is asking questions.
A hundred million plus. Gone.
AI is great for structure. For summarizing the shape of a conversation. But it doesn’t know what matters. It doesn’t understand legal exposure. It doesn’t flag ambiguity. It just spits out what sounds right, because under the hood it’s a model trained to complete sentences.
So if you’re forwarding those summaries without reading them first, or if you’re using them as documentation for legal, HR, finance, or client commitments, you’re placing a bet. And it’s not a smart one.
For starters, the companies shipping these tools could stop pretending the summaries are ready to stand on their own. How about a simple check: ask every participant to confirm the summary is accurate before it’s stored, sent, or logged. Put a human in the loop. Make it clear the summary isn’t a source of truth unless someone signs off.
Even better, maybe these companies should spend a little less time flipping on half-baked features by default and a little more time thinking about the people who’ll get burned by them. Real people. Real businesses. Not just shareholders watching the stock pop on the next earnings call, driven by algorithms counting the number of times AI gets mentioned.
The lawsuits are coming. And when they do, I doubt the quarterly earnings boost will cover the tab.