Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugw2CrN5z…`: "This is actually one of my biggest fears, someone replicating my art that I've s…"
- `ytc_UgwbqspQp…`: "I thought grok was going to be a little more right wing hell its even worse then…"
- `ytr_Ugy7c_-Fs…`: "Nah, you should fear the AI, because we already aren't controlling it anymore. T…"
- `ytc_UgwMLbBKq…`: "No ai butlers? This like flying cars all over again. Give the people what they w…"
- `ytc_UgzY6NXDv…`: "So we’re putting trucks on the road without drivers, and taking hundreds of immi…"
- `rdc_l5cr30q`: "In my opinion AI is more of a marketing tool than anything else at the moment, t…"
- `ytr_UgwhJHE0X…`: "Can a submarine swim? LLMs pass almost all reasoning tests we can throw at the…"
- `rdc_czls228`: "Every million dollar cut of executive salary will save them, what, less than 10 …"
Comment

> Most models do this. it's where synthetic data is most often derived. That being said i rather use falcon 180b for free on huggingchat with geckodriver, or even bard. A simple alignment would fix it, but im sure they are still checkpointing. If this is the biggest criticism, i'd say that's pretty impressive.

Source: reddit
Category: AI Harm Incident
Posted: 1702138856 (Unix epoch seconds, approx. 2023-12-09 UTC)
Score: ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_kcq75de", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kcnu4gn", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kcpjhfb", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kcnbiab", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_kco4d98", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
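The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch can be indexed for the per-comment lookup this page performs; the `parse_codes` helper and the two-row sample payload are illustrative, not part of the tool itself, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the response shown above:

```python
import json

# Two-row sample in the same shape as the raw batch response above
# (illustrative payload, not the actual stored response).
raw_response = """
[
  {"id": "rdc_kcq75de", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kcnu4gn", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

def parse_codes(text: str) -> dict:
    """Index a raw batch response by comment ID for O(1) lookup."""
    return {row["id"]: row for row in json.loads(text)}

codes = parse_codes(raw_response)
print(codes["rdc_kcnu4gn"]["emotion"])  # -> fear
```

Indexing by `id` is what lets the page resolve a pasted comment ID to its coded dimensions without rescanning the whole batch.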