Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or open one of the random samples below.
Random samples — click to inspect

- Who defines improvement and what makes us believe AI is an improvement? Why do w… (ytc_UghF9MR_Z…)
- i really appreciate you! ai slop images have really been getting me down but i f… (ytc_Ugzpc_hNK…)
- Do tell us how SPECIAL you are and we'll compare how SKILLED YOU ARE. Last Fall… (ytr_Ugw5ERhyZ…)
- >Why isn’t it possible to create a system where everyone has some work, with … (rdc_gsqvcuk)
- Ai because the doll couldn't be dead cause she is an electric and robots is elec… (ytc_UgwxYORLG…)
- but it's still correct. the great potential of automation is in the potential of… (ytr_UgivNXalc…)
- Alrighty, half way the video you fall down a route of "AI can't think", as if hu… (ytc_Ugy20-5Sx…)
- Bullshit, sorry. But this is soooo far from the truth. This is just not at all w… (ytc_Ugz4xQU1Y…)
Comment

> In general, it might be better at making decisions that optimise for some criteria, but that alone does not guarantee that it will be optimising for the things that we want. Putting AI systems in such positions of power is just asking for problems.

Source: reddit | Topic: AI Responsibility | Posted: 1648685822.0 (2022-03-31 UTC) | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id":"rdc_jikskli","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jk8wf28","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_i2s2klz","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_i2s4o9c","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_i2sa4tu","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
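A response in this shape can be machine-checked before the codes are stored. The sketch below is a hypothetical validator, assuming Python and a codebook inferred only from the values visible on this page (the real coding scheme may allow more values); `CODEBOOK` and `parse_batch` are illustrative names, not part of the tool.

```python
import json

# Hypothetical codebook: allowed values per dimension, inferred from the
# coding results shown on this page; the real scheme may define more.
CODEBOOK = {
    "responsibility": {"none", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "outrage", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments),
    index it by comment ID, and drop records with off-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        ok = cid and all(rec.get(dim) in vals for dim, vals in CODEBOOK.items())
        if ok:
            coded[cid] = rec
    return coded

# Example batch: one valid record, one with an invented emotion value.
raw = json.dumps([
    {"id": "rdc_i2sa4tu", "responsibility": "distributed",
     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
    {"id": "rdc_example", "responsibility": "user",
     "reasoning": "unclear", "policy": "none", "emotion": "joy"},
])
coded = parse_batch(raw)
print(sorted(coded))  # ['rdc_i2sa4tu']
```

Indexing by comment ID is also what makes the look-up-by-ID view above cheap: a validated record can be fetched directly instead of re-parsing the raw response.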