Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I had no idea the people typing in the prompt are supposed to be the “artists” o…
ytc_UgznbZgzo…
blame the AI for poor fucking human design we designed it to infinitely grow and…
ytc_UgwXubWUW…
You know easely due to how much the image is
-ai
Looks perfectly at camera
Clear…
ytc_UgxjZnLwK…
i dont think this worked. I think you asked chat gpt to respond in the way you w…
ytc_UgwnGOGHF…
A big difference here is that the technologies you're mentioning were strictly u…
rdc_jtzg8bx
Why isn't anyone commenting on the fact that this "extinction" matter is self-in…
ytc_UgzhH7Di9…
First you could talk to someone from your own country and get an answer
Then you…
ytc_Ugxpp_k7X…
We should all just redraw the ai „art“ shit, but better. Like literally showing …
ytc_Ugx1Q3ZDe…
Comment
No, just *find a real source*.
I don't understand why anyone would ever use an LLM for facts.
Facts are only as good as their source.
reddit · AI Governance · 2025-11-07 (Unix 1762515674) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nnjdv4s","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_nnjffip","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nnkwo3i","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"rdc_nnjiujl","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_nnjkoho","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
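A raw response like the one above has to be parsed and validated before its codes can be attached to comments. The sketch below is a minimal, hypothetical validator, not the tool's actual pipeline: the allowed value sets in `SCHEMA` are an assumption inferred only from the values visible in this dump, and the real codebook may include more categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred solely from
# the values that appear in this dump; the real codebook may differ.
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"fear", "indifference", "disapproval", "outrage", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Records with a missing id or an out-of-schema value are dropped,
    so a malformed model output cannot corrupt the coded dataset.
    """
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue  # no comment id to attach the codes to
        codes = {dim: record.get(dim) for dim in SCHEMA}
        if all(codes[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = codes
    return coded

raw = ('[{"id":"rdc_nnkwo3i","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"disapproval"},'
       '{"id":"rdc_bad","responsibility":"aliens",'
       '"reasoning":"deontological","policy":"none","emotion":"unclear"}]')
result = parse_coding_response(raw)
print(result)  # only the in-schema record survives
```

The second record is discarded because `"aliens"` is not an allowed `responsibility` value; looking a surviving comment up by its ID is then a plain dictionary access, matching the "look up by comment ID" view above.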