Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_m3bshhd`: "We're so addicted to fake news that I'm honestly awaiting proof that a nutjob wh…"
- `ytc_Ugz-lVDEr…`: "The mail robot keeps talking about a singularity he doesn't seem to understand w…"
- `ytc_UgwBAnq1O…`: "If we do get to a world with loads of ai, maybe 1 rogue ai wont be that bad as w…"
- `rdc_o5qtgti`: "Using or not using AI has nothing to do with using your brain to make decisions.…"
- `ytc_UgxPoX8Qt…`: "The thought of money holding power when AI takes over is laughable. What value …"
- `rdc_eude2pw`: "That entire project was a corruption ridden process. And it was created from an …"
- `ytr_UgytMIivT…`: "@ncwordman Piracy is more old than pirates with parrots. If you are worried by A…"
- `ytc_UgxQAMDW5…`: "Imagine asking him if he's worried about his Kids jobs when they literally never…"
Comment
> AI is only as powerful as it's real world agency, which is still nil even with full unfettered internet the whole concept of "responsible AI" is a mixture of working to cement their existing lead, FUD and the fear of short sighted regulatory oversight imposed on them.
>
> The risks stemming from "AI" aren't about terminators or the matrix but about what people would do with it, especially early on before any great filter on what's useful and what isn't comes into play.
>
> The biggest difference between the whole AI gold rush these days and the blockchain one from only a few years back is that AI is useful in more applications out of the gate and more importantly it can be used by everyday people.
>
> So it's very easy to make calls such as lets replace X with AI or lets augment 50 employees with AI instead of hiring 200.
>
> At least the important recent studies into GPTs and other decoder only models seem to at least indicate that they aren't nearly as generalizable as we thought they were especially for hard tasks, and most importantly it's becoming clearer and clearer that it's not just a question of training on more data or imbalances in the training data set.
Source: reddit · Topic: AI Governance · Posted: 2024-05-27 02:56 UTC (1716778560) · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
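Each coded dimension is categorical. A minimal sanity check of a coding result might look like the sketch below; the allowed value sets are inferred only from the codes visible on this page, not from any official schema, and `validate` is an illustrative helper, not part of the tool.

```python
# Value sets inferred from codes shown on this page; not an official schema.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"resignation", "fear", "mixed", "indifference", "outrage"},
}

def validate(code: dict) -> list[str]:
    """Return the dimensions whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items()
            if code.get(dim) not in allowed]

# The coding result shown in the table above.
result = {"responsibility": "user", "reasoning": "consequentialist",
          "policy": "none", "emotion": "resignation"}
print(validate(result))  # []
```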
Raw LLM Response
```json
[
  {"id":"rdc_l5uw2u1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l5u3f5v","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_l5u645u","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_l5u05jg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_l5us6lk","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
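The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing such a payload and indexing it by comment ID for lookup (field names are taken from the sample above; the indexing step is illustrative, not the tool's actual implementation):

```python
import json

# Two rows from the raw LLM response shown above.
raw_response = '''[
  {"id":"rdc_l5uw2u1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l5u05jg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Index each coded comment by its ID so "Look up by comment ID" is O(1).
codes = {row["id"]: row for row in json.loads(raw_response)}

print(codes["rdc_l5u05jg"]["emotion"])  # resignation
```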