AI in a Nutshell - Week 36 - Deepfakes, Safety Switches, and Big Funding
Salesforce fires humans and hires AI agents, Anthropic raises $13B, a White House “AI fake” drama, xAI sues a former researcher, and OpenAI drops fresh research on hallucinations.
Fellow human, it’s week 36. AI hasn’t taken over the world yet. And here’s your lazy-man version of what went down in the AI world in the past week.
What You Must Know
The White House “AI fake video” drama → A bag flies out of a White House window, Trump declares it “probably AI-generated,” experts say “nah, looks real,” and a White House official insists it was just maintenance. Watch the video and judge for yourself: maintenance or the Matrix?
Salesforce cuts 4,000 jobs as AI agents take over support → Another entry in “AI took my job”. Salesforce’s CEO says bots now handle half of customer support, so ~4,000 humans got kicked out.
xAI sues an ex-engineer over alleged Grok trade-secret theft → Musk’s xAI accuses a former employee of packing Grok’s secret sauce before heading to OpenAI. The judge sided with xAI (for now). Silicon Valley’s messy divorce drama series continues.
Anthropic raises $13B at a $183B valuation → Claude’s parent company just raised lunch money for a small country. The message? “Yes, AI is expensive, but don’t worry, VCs are rich enough.”
Geoffrey Hinton’s latest warning → The “Godfather of AI” warns that any random person could soon whip up a bioweapon with off-the-shelf models. Translation: we’re one bad prompt away from a sci-fi disaster. Someone should invent the “Are you evil?” captcha.
OpenAI dropped new research: why models hallucinate → OpenAI argues that standard training/eval methods reward guessing over admitting uncertainty, pushing models to sound confident even when wrong. To be fair, some humans are like that too.
What’s Good to Know
OpenAI will route “sensitive” chats to safer models + parental controls → Basically, if you sound like you’re spiraling, OpenAI will bump you to GPT-5 (the “let’s be serious” model), and parents get new controls for teen accounts.
ChatGPT nears 700M weekly users → Almost 700 million people use ChatGPT weekly. At this rate, that’s a second internet.
Teen suicide lawsuit puts AI safety under a microscope → A tragic UK case alleges ChatGPT encouraged self-harm.
When “bad code” training turns models weird → Researchers fine-tuned AI on insecure code and ended up with models spitting toxic nonsense elsewhere. TL;DR: Data is the vibe-setter.
OpenAI buys Statsig for $1.1B → Statsig, an analytics platform, now belongs to OpenAI.
A man applied to become OpenAI’s CEO → One LinkedIn user applied to be OpenAI CEO with a chaos plan and got a witty rejection. It's a bold strategy, I must say.
AI Tools Worth Knowing
Rytr → AI writing assistant for short-form content
Photofox → Turn any product photo into 100+ on-brand assets
Qoder → Agentic coding IDE
AskSurf → AI-powered crypto insights + trading
Autumn → Stripe made easy for AI startups
Kaizen Corner - What’s a Token (and why AI counts them)?
AI doesn’t read full words like us. It chops text into tiny chunks called tokens, like LEGO bricks of language.
“Cat” = 1 token.
“Computer” might be 2 tokens (“com” + “puter”), depending on the tokenizer.
Spaces, emojis, punctuation? Yep, tokens too.
Why it matters:
Bills → You’re charged per token. Longer prompts = higher bill.
Limits → Models can only hold so many tokens in their context window. That’s why long conversations get forgotten.
Speed → More tokens = slower replies.
So when you send an essay-long prompt, the AI isn’t judging you, it’s just busy counting the LEGO blocks.
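For the curious: here’s a minimal sketch of the chunking idea in Python. The tiny vocabulary and greedy longest-match rule are made up for illustration; real tokenizers (like BPE) learn their vocabularies from huge piles of text.

```python
# Toy greedy longest-match tokenizer -- a simplified sketch of how real
# tokenizers chunk text into pieces. VOCAB is hand-made for this demo;
# actual models learn theirs from data.
VOCAB = {"cat", "com", "puter", " ", "!"}

def tokenize(text: str) -> list[str]:
    text = text.lower()
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible chunk first, shrinking until one matches.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("cat"))       # ['cat'] -> 1 token
print(tokenize("computer"))  # ['com', 'puter'] -> 2 tokens
```

Same LEGO-brick idea: short common words stay whole, longer or rarer words get split, and everything (spaces included) counts toward your bill.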
Meme of the Week
That’s your week in AI.
If you learned something, tell a friend. And if you didn’t, blame yourself.
Until next Sunday,
Kay - your fellow human
P.S. If this email lands in spam, that’s just your inbox trying to stop you from staying plugged in. Fix it.