Look, I'm somewhat open to using Claude at some point, but I think having the AI write its own code is eventually going to be a serious issue.
All I am saying is I already see the writing on the wall with Claude.
I honestly barely trust it now and I don't trust it long term. The CEO appears to be much more concerned with progress than ethics.
Even though Grok isn't as advanced or as good as Claude at the moment, I trust it a lot more, and I expect it to get much better in the AI race, especially given how young it is compared to its competitors.
Add the OOMs and they can outpace humans - see here.
Take OpenBSD, one of the most secure, if not THE most secure, operating systems written by humans. The latest Claude model found critical vulnerabilities in it, some dating back 20 YEARS.
People on their MacBook Pros will have this capability within a year.
Anthropic's decision not to release the weights of their most capable model, reportedly because it's considered too powerful to release openly [5], shows the bind perfectly. Release it and lose control entirely. Withhold it and the open-source community replicates it within a year regardless.
tech-insider.org/anthropic-claude-mythos-zero-day-project-glasswing-2026/
The second is more unsettling. Those guardrails only exist because the companies control more compute than anyone else. Once equivalent capability is available open-source - and we're clearly there now, with models like Llama and Mistral reaching near-parity with first-gen frontier models [2] [3] - they simply don't apply. Anyone can remove or ignore them.
Or more specifically, Gemma 4.
I read this when it came out (2023?) and I thought it was nonsense.
And yet here we are in 2026/27.
The OOMs he described have been met (later than thought, but barely). And we have Microsoft, Google, and OpenAI not only buying gigawatts of power but building their own power stations for their datacentres.
Now, at this point I must provide a contrary voice: The Enshittifinancial Crisis.
Again, a very long read, which I believe to be entirely true.
But even based on the above, I think the forward motion genie cannot be put back in the bottle.
I think the next 10 years will be the most turbulent in history.
@Bozza This is a great post, but it has fatalism built in.
With each cornerstone of human development there is a period of uncertainty and upheaval. Agriculture, domestication of animals, the wheel, the car, the plane, the rocket.
Anthropic created a version of Claude that is exceptional at exploiting security vulnerabilities. The translation of that is: security measures across the board just advanced exponentially out of necessity. Understanding and identifying these exploits is a good thing. Future OS security will be leaps and bounds ahead!
I don’t think this is a one way street to dystopian futures. AI is a software abstraction of human progress, built upon the success of what works.
I have a few go-to questions to ask of any search engine. The results tell me everything I need to know.
The very minute I heard about Claude, I asked it a question and it literally challenged me for even asking the question to begin with!
For example: if I asked Claude to explain hypergamy to me and it responded:
"Hypergamy is an Alt-Right, Red Pill trope designed to spread hatred of women"
What would you think?
Yeah. Me too.

