RULES
The Hub is moderated for decorum. Please follow these rules while participating in The Hub:
- Be courteous and friendly to new members.
- Do not attempt to scare off new users from using the platform.
- Do advertise your Tribes and invite users to join conversations in them.
- Always follow our Content Policy.
These rules only apply to The Hub, with the exception of the Content Policy, which is site-wide. Please observe individual tribe rules when visiting other tribes.
Sick of Rules? Want to Shit-talk?
Join The Beer Hall
Want a FLAIR next to your name? Send a message to redpillschool. Reasonable requests will be granted.
Have questions? Ask away here!
Join our chatroom for live entertainment.
As for hallucinating, it is pretty real. In your use case I'd try to search eg 'is Claude hallucinating on only user supplied data'
Ah yeah, you’re right on that. I do a cycle between each input session where I vet the output for inconsistencies or hallucinations.
In one case, it had applied the model I defined for one rule set to another. I caught it because it had introduced information that didn’t share context. Anyway, I tightened the rule set and repeated the cycle!
That's the only news item last month where I had to reread a header and go verify it.
I just get here mid convo, no context
using Claude
I recall @Bozza talking about Claude in the tribe for this year's April Fool's joke, and @Vermillion-Rx talking about not trusting it in the same tribe.
I don't think I'll be trusting any of those things with anything important, ever. Maybe for speeding up research, but I still would want to verify results myself.
Fuck, I remember reading about some law firm using AI (forget which one) to research a case, and it just made shit up. Cost them millions to fix things after they trusted it and ran with its results.
@Typo-MAGAshiv I should add; I input the data myself, I define the rule set and the parameters, objective and goal of the project and ask it to make it nice little displays that visualise everything I’ve input into it.
It appears that Neocons expected Iran to collapse within 3 days, America would swoop in and take control of their oil production handing its profit stream to insiders, and we'd declare victory and forget the whole thing within a few news cycles.
Yeah, they got a bit cocky after the Venezuelan presidential heist and the precision strike on the original Khamenei. It was blindingly obvious Trump didn’t have a long-term plan, and in Biden-esque fashion he's about to flood the market with oil reserves so his sycophants don’t feel the price shock of his blunder (yet).
smashing the face of a Jesus statue at a Christian church that has existed for over 1000 years.
Wait til you hear what the jews did to the real Jesus!
@Stigma there's an independent add-on of sorts called Neanderthal that makes Claude keep its replies to the user short.
There are a few levels of Neanderthal as well; it should be available from GitHub.
@Typo-MAGAshiv I should add, not to doxx myself, but my use case isn’t looking for it to create anything factual outside of formatting and display metrics, so I don’t have much experience with Claude hallucinating. Its sole purpose is to streamline the most tedious tasks out of my way, while also maintaining a rule set and a cohesive idea better than I might do myself. I often run off on a tangent and get distracted, so that’s a boon.
I’m thinking that they purposefully make Claude obtuse about your project sometimes. It’ll randomly forget where a directory is located and make out that it didn’t do that step, so you burn tokens clarifying memory and context. And that’s the rub: the most basic subscription is quite expensive, and the input/output allowance is quite harsh.