September showed how messy the AI world is becoming. Big tech is fighting lawsuits, deepfakes are tricking real people, and hackers are picking up new tools faster than regulators can react. At the same time, data breaches keep piling up. But not all the news is dark. Some companies are still finding ways to make AI genuinely useful.
Apple sued over AI training on pirated books
Three American authors are suing Apple. They claim that the company used their books without permission to train its OpenELM language models, the same ones that now power Apple Intelligence. The accusation isn’t new. Many AI firms have faced similar lawsuits, but this one hits Apple directly. According to the complaint, the company built a massive dataset, feeding it everything from web pages to books. And in that pile, the authors say, were copyrighted works. Copied, without consent. Without payment. If the court rules against Apple, the company has a few options:
- Pay for expensive licenses
- Train on smaller, “clean” datasets and risk weaker results
- Personalize training with user data
None of these paths is cheap. For Apple, scraping the Internet was the easiest route. Now it may pay the price.
Anthropic reaches $1.5 billion copyright settlement with authors
While the fallout from the Apple case is still unclear, Anthropic, an AI startup, has settled a copyright lawsuit brought by a group of book authors for $1.5 billion. Like Apple, the company trained Claude on a stockpile of pirated books pulled from shadow libraries. As part of the deal, the company promised to destroy the data in question. We’re talking about roughly half a million works, priced at about three grand each. Now imagine that bill landing right after you’ve pulled in $13 billion in new funding. Costly, yes, but fair enough.
“Pink salt diet” shows how convincing deepfakes have become
Scroll through TikTok or YouTube and you’ll see the same comment under countless videos: “Is it AI? No way. Can’t be.” Or worse: “Price? Do you ship to California?” Dr. Rachel Goldman doesn’t have to wonder what that feels like. She’s living it. Her appearances on “Oprah Daily” made her recognizable, and that’s exactly what dragged her image into AI deepfakes. Clips now circulate in which she appears to say the “pink salt trick” is 100% natural and free of side effects. At least two people bought the “so real” product. Others even emailed her their medical histories, asking if it was safe. The problem has grown big enough to reach Washington: the TAKE IT DOWN Act, signed into law earlier this year, requires platforms and websites to remove nonconsensual deepfake imagery within 48 hours of a report.
Hackers rush to download DeepSeek pen-testing tool
A new DeepSeek-powered tool has been downloaded 10,000 times since July. It started life as a pen-testing utility, meant for security teams to poke at systems and find weak spots before criminals do. Trouble is, the tool is already being likened to Cobalt Strike, a program that began as a legitimate security product and, over time, became a hacker favorite used to steal data and roll out ransomware. The DeepSeek tool could follow the same route: in the wrong hands, it can automate parts of a cyberattack, making intrusions faster and harder to spot. That’s why 10,000 downloads in just a few months is raising alarms. What started as security tooling could easily end up fueling the next wave of hacks.
FinWise ex-worker accessed nearly 700,000 client files
The breach itself happened back in May 2024, but nobody noticed until a routine audit uncovered it in the summer of 2025. This wasn’t a hack from the outside: a former employee simply used their own access rights to pull customer records. Nearly 689,000 people had details exposed, including names, addresses, Social Security numbers, and more. Whether that data made its way to the dark web is still unknown. But with numbers this high, it’s no surprise that concern among customers and regulators is growing.
If you’re worried your details might be among the exposed records, you can check https://haveibeenpwned.com. If your email shows up there, change your passwords and enable two-factor authentication.
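For the more technically inclined, the same service exposes a Pwned Passwords “range” API that lets you check a password against known breaches without ever sending the password itself: you hash it locally with SHA-1 and transmit only the first five hex characters, then match the rest on your own machine (a technique called k-anonymity). Here is a minimal Python sketch of the local half of that check; the actual HTTP request is left out:

```python
import hashlib

def hibp_query_parts(password: str) -> tuple[str, str]:
    """Prepare a k-anonymity lookup for the Pwned Passwords range API.

    Returns the URL to query (which contains only the first five hex
    characters of the SHA-1 hash) and the hash suffix, which you compare
    locally against the candidate list the API returns. The full password
    never leaves your machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    return url, suffix
```

If the suffix appears in the response body for that URL, the password has shown up in a breach and should be retired.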
On a brighter note: Lepro’s new lights listen, learn, and adjust
Most of us don’t feel like leaving the couch just to flip a light, and now you don’t have to. Just say “Hey Lepro” and give a command. The lights have tiny microphones inside, always listening for your voice, and an AI model to make sense of what it hears. Trained on design guides and color theory, the system reacts to context, not just commands: tell it you’re throwing a party, or just want to unwind, and it will set the mood automatically without you naming a single color.
Final thoughts
From lawsuits and scams to smart lights, September’s stories remind us of one thing: AI is here to stay, for better and for worse. It can power the next breakthrough or the next big scam. The challenge is keeping up with the risks while still enjoying the progress. And that balance is what will define the months ahead.