As for hallucinating, it is pretty real. In your use case I'd try to search eg 'is Claude hallucinating on only user supplied data'
Ah yeah you’re right on that. I do a cycle between each input session, I vet the output for inconsistencies or hallucinations.
In one case, it had applied the model I defined for one rule set to another. I caught it, because it had introduced information that didn’t share context. Well anyway, I tightened the rule set and repeated the cycle!
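That kind of vetting pass can be partly automated. Below is a minimal, hypothetical sketch (not the poster's actual workflow, and the function name and sample data are made up for illustration): a crude grounding check that flags terms in the model's output that never appear in the data you supplied — one cheap way to surface "introduced information that didn't share context".

```python
import re

def find_unsupported_terms(source_text: str, output_text: str, min_len: int = 4) -> set:
    """Return words (len >= min_len) in output_text that never appear in source_text."""
    def tokenize(s):
        return set(re.findall(r"[a-z0-9]+", s.lower()))
    source_words = tokenize(source_text)
    return {w for w in tokenize(output_text)
            if len(w) >= min_len and w not in source_words}

# Hypothetical example data: "quarterly projections" was never in the input,
# so those terms get flagged for a manual look.
data = "Rule set A: totals per region, March figures only."
model_output = "March totals per region, plus quarterly projections."
print(sorted(find_unsupported_terms(data, model_output)))
# ['plus', 'projections', 'quarterly']
```

It is deliberately dumb (paraphrases and synonyms slip through), but as a first-pass filter it catches the obvious case where the model drags in material from a different rule set.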
That's the only one in the last month where I had to reread a header and go verify.
@Typo-MAGAshiv I should add: I input the data myself, I define the rule set and the parameters, objective, and goal of the project, and ask it to make nice little displays that visualise everything I've input into it.
using Claude
I recall @Bozza talking about Claude in the tribe for this year's April Fool's joke, and @Vermillion-Rx talking about not trusting it in the same tribe.
I don't think I'll be trusting any of those things with anything important, ever. Maybe for speeding up research, but I still would want to verify results myself.
Fuck, I remember reading about some law firm using AI (forget which one) to research a case, and it just made shit up. Cost them millions to fix things after they trusted it and ran with its results.
@Stigma there's an independent add-on of sorts called Neanderthal that makes Claude communicate with the user more concisely.
There are a few levels of Neanderthal as well; it should be available from GitHub.
@Typo-MAGAshiv Again, not to doxx myself, but my use case isn't looking for it to create anything factual outside of formatting and display metrics, so I don't have much experience with Claude hallucinating. Its sole purpose is to streamline the most tedious tasks out of my way, but also to maintain a rule set and cohesive idea better than I might do myself. I often run off on a tangent and get distracted, so that's a boon.
I’m thinking that they purposefully make Claude obtuse about your project sometimes. It’ll randomly forget where a directory is located and make out that it didn’t do that step. You burn some tokens clarifying memory and context. And that’s the rub, the most basic subscription is quite expensive and the input/output allowance is quite harsh.
I've been using Claude for the first time this week and it's really in a sandbox of its own. Claude cowork works flawlessly for my needs: he's moving documents around, making references to worksheets I need, and even making useful as fuck suggestions. It's like we haven't seen Clippy since his balls dropped, and now he's a professional and his name was Claude all along.
I'm putting out work at an exceptional pace — so good I've had time to scrap the first two projects after about 5 hours of work on each, because I could come up with something even better and still hit my deadlines.
Don't want to doxx my use case, but I can invest the time freed up by lightening my workload in myself and my next steps. It's impressive stuff.

