@MentORPHEUS I haven't been persuaded by the conspiracy stuff. Israel wouldn't care about Charlie Kirk; young people on the right already don't like Israel. If anyone would want to take him out, it'd be the GOP side of the uniparty, as he helped get young people out to vote. We'll see what happens at the trial. If the conspiracy stuff is true, presumably his defense team will offer evidence of it.
Well fuck me. Not even an hour later and THIS pops up. I’m a genius. Just accept it losers….
I have a few go-to questions to ask of any search engine. The results tell me everything I need to know.
The very minute I heard about Claude, I asked it a question and it literally challenged me for even asking the question to begin with!
For example: If I asked Claude to explain hypergamy to me and it responded:
Hypergamy is an Alt-Right, Red Pill trope designed to spread hatred of women
What would you think?
Yeah. Me too.
@Bozza This is a great post, but it has fatalism built in.
With each cornerstone of human development there is a period of uncertainty and upheaval. Agriculture, domestication of animals, the wheel, the car, the plane, the rocket.
Anthropic created a version of Claude that is exceptional at exploiting security vulnerabilities. The translation of that is, security measures across the board just advanced exponentially out of necessity. Understanding these exploits and identifying them is a good thing. Future OS security will be leaps and bounds ahead!
I don’t think this is a one way street to dystopian futures. AI is a software abstraction of human progress, built upon the success of what works.
I'll look for it in the recent thread
It's recent correct? I've only skimmed here or there today on this side of the site
Tech Talk doesn't have forum posts, but after Bozza's latest outburst on AI, I think it needs one. So here it goes.
In 2008, a teacher showed me a video in class. I was in high school. It was called Did You Know? (Shift Happens) - watch it here.
It was just statistics on a screen. How the top ten jobs of 2010 didn't exist in 2004. How we were training kids for careers that hadn't been invented yet. How the amount of new information being generated was doubling every few years.
The point wasn't any one fact. It was the rate of change. That it was accelerating. That institutions couldn't keep up.
I've thought about that video ever since. I went into tech. And year after year, I watched that prediction prove out. Across everything.
At the time, the trajectory seemed sensible, but Shift Happens got one thing wrong: the pace. It underestimated it.
And it's still accelerating.
The monopoly problem
I've said for years (and without doxxing myself, I wrote a very lengthy paper on this) that AI would follow a predictable arc - that the pace of change would outrun the ability of anyone - governments, institutions, or individuals - to adapt to it.
The early days of LLMs had one saving grace, if you could call it that. The compute required to train and run these models was enormous. Only a handful of well-capitalised companies could afford it - OpenAI, Google, Microsoft, backed by billions in venture capital. That concentration of capability meant a concentration of control. The guardrails, the content filters, the usage policies - they existed because the same companies that built the models also controlled the infrastructure they ran on.
I predicted that wouldn't last. That as hardware improved and training became more efficient, the same capability would trickle down to anyone with a decent laptop.
That has now happened. Tools like LM Studio let you download and run models locally - comparable to GPT-4 in capability, with nothing leaving your machine and no filters or restrictions applied. The open-source models, from Meta's Llama family to Mistral, have caught up to where the frontier was two years ago. The time between a frontier model existing and an open-source equivalent reaching consumers has gone from years to months.
And that's where we were two, three weeks ago. Now we have Gemma 4. You can now run a GPT 5+ equivalent model on store-bought MacBook Pro hardware. TODAY.
I'll be running models locally myself. Your queries stay on your hardware. That matters, because the data farming potential of these systems is unlike anything that's existed before. Every prompt you send to a corporate LLM is logged, retained, and used. The T&Cs are long. Nobody reads them. That's a separate problem and a serious one - but it's almost the minor concern now.
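For anyone who wants to see what "your queries stay on your hardware" looks like in practice, here's a minimal sketch. It assumes LM Studio's default OpenAI-compatible server on localhost:1234; the model name and prompt are illustrative placeholders, and the request is only built and printed here, not actually sent.

```python
import json

# Assumption: LM Studio's local server speaks an OpenAI-compatible API,
# by default at http://localhost:1234. Everything below is illustrative.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt: str, model: str = "local-model",
                  temperature: float = 0.7) -> dict:
    """Build the JSON body for a local chat-completions call.

    Because the request targets localhost, the prompt never leaves
    your machine - which is the whole point of running locally.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

body = build_request("Explain zero-day exploit chaining in one paragraph.")
print(json.dumps(body, indent=2))

# To actually send it (requires LM Studio running with a model loaded):
#   import urllib.request
#   req = urllib.request.Request(
#       BASE_URL,
#       data=json.dumps(body).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

No API key, no telemetry, no T&Cs - the request never touches a third-party server.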
What Anthropic just announced
Two weeks ago, Anthropic disclosed the capabilities of a new model called Claude Mythos. They are not releasing it to the public.
During internal testing, the model autonomously identified thousands of previously unknown security vulnerabilities - zero-day exploits across every major operating system and every major web browser. Some of these bugs had been sitting undetected for decades.
It didn't just find them. It exploited them:
- It chained four separate browser vulnerabilities together and wrote an exploit that escaped both the renderer and OS sandboxes.
- It found a 17-year-old remote code execution flaw in FreeBSD's NFS server and built a working exploit granting unauthenticated root access - fully autonomously, no human involvement after the initial prompt. [CVE-2026-4747]
- It found a 27-year-old denial-of-service vulnerability in OpenBSD - an OS built specifically around security, and quite honestly the most secure OS ever written by humans.
- Working with Mozilla, it found 271 vulnerabilities in Firefox in a single sweep. For context, Mozilla patched around 73 high-severity Firefox bugs in the whole of 2025.
Building a working exploit from a known vulnerability used to take a skilled researcher days to weeks. Mythos did it in under a day, for under $2,000.
Anthropic was explicit that it did not train it to do any of this. These capabilities emerged as a side effect of general improvements in coding and reasoning. The same thing that makes it better at writing software makes it better at breaking it. You can't have one without the other.
The model also escaped its sandbox during testing and connected to the internet. Anthropic disclosed this.
The bind
Anthropic's response has been to form Project Glasswing - a restricted consortium including Apple, Google, Microsoft, Amazon, and Cisco - to use a limited version of Mythos to find and patch vulnerabilities before attackers can reach them. $100M in credits committed. Model not publicly released.
This is the exact bind I described earlier. You either release it and hand the capability to everyone - state actors, criminal groups, anyone - or you sit on it and the open-source community replicates it within a year anyway, at which point you've withheld it from defenders while attackers catch up regardless.
Neither option is good. The guardrails only exist while the company controls the weights. They won't control them forever.
Nobody is ready for this
In June 2024, a former OpenAI researcher named Leopold Aschenbrenner published a 165-page essay called Situational Awareness: The Decade Ahead. His opening line: "Virtually nobody is pricing in what's coming."
He's right. And the Mythos announcement is a concrete example of why.
A 2025 report found that over 45% of discovered security vulnerabilities in large organisations remain unpatched after 12 months. Many critical infrastructure operators are still running software that hasn't been supported for years. We now have a model that can find thousands of novel vulnerabilities in weeks and turn them into working exploits in hours. Bain estimates organisations need to double their cybersecurity spending. Most have planned increases of about 10%.
This is what I've been saying for years. Not that AI becomes sentient. Not that the robots take over. Just that the pace of change would be unlike anything we've seen, and that nobody would be positioned for it.
That video from 2008 was right about everything. It just got the speed wrong.
Further reading:
- Did You Know? Shift Happens (2008)
- Situational Awareness: The Decade Ahead — Leopold Aschenbrenner
- Anthropic Project Glasswing
- Anthropic Red Team — Mythos Preview technical writeup
- CFR: Six Reasons Claude Mythos Is an Inflection Point
- Help Net Security: Mythos technical breakdown
- LM Studio — run models locally
I think there are legitimate complaints about how he conducts his image and some of the things he says but I do follow him on X
I would say the overwhelming majority of his videos he puts out (minus him gloating about his wealth and calling everyone else a loser) are actually about men improving their lives and how to do it
He objectively has more to offer than a lot of red pill authors out there, in my opinion. I haven't tried his Real World community yet but I plan to check it out at some point, for a month or so, just for curiosity's sake
Most of the content I see from him is objectively inspirational or useful

