Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytr_UgyVUyYTR…`: "@sahara3607 yeah, unfortunaly, but this whole think coud have been avoided, my g…"
- `ytr_Ugyp3DhXm…`: "It sounds like you found the interaction a bit eerie! AI definitely has a way of…"
- `ytc_UgwkU6XDD…`: "AI is difficult. I certainly don't support AI "art", and definitely not "AI arti…"
- `ytc_UgysKyuZG…`: "This is where UBI comes in, not as a tool that enables freedom, but as a rationi…"
- `ytc_Ugxdj0eEq…`: "I know some masochistic bootlicker is going to say "If you don't like being repl…"
- `ytc_UgwBz382f…`: "I did not agree to be part of a test for the efficiency and safety of self-drivi…"
- `rdc_narw984`: "It may be because of the way ChatGPT feels like talking to a person as opposed t…"
- `ytc_UgzBwf11f…`: "We live in a reality of opposites. For every concept, you can think of, there ex…"
Comment
> AI chatbots can be dangerous in spreading propaganda and misinformation, and their reach can be massive. We never know when someone is using a chatbot, so we need to be aware of the risks. There are also numerous other use-cases to consider.

Source: reddit · Topic: AI Governance · Posted: 2023-03-29 (Unix 1680101618.0) · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
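The four coding dimensions in the table can be checked mechanically. A minimal validation sketch follows; note that the allowed value sets below are inferred only from the codes visible on this page and are almost certainly an incomplete codebook:

```python
# Hypothetical validator for one coded record. The value sets are
# inferred from the sample output on this page, not a full codebook.
ALLOWED = {
    "responsibility": {"none", "unclear", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage", "mixed"},
}

def validate_code(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The record coded above validates cleanly:
print(validate_code({"id": "rdc_je5aj7i", "responsibility": "unclear",
                     "reasoning": "consequentialist", "policy": "regulate",
                     "emotion": "fear"}))  # → []
```

Records that fail validation (a missing dimension or a value outside the set) can be queued for re-coding rather than silently stored.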
Raw LLM Response
```json
[
{"id":"rdc_je4y3yh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_je5aj7i","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_je5d949","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_je58g1i","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_je6l155","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
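A batch response like the one above can be parsed and indexed by comment ID for the lookup view. A minimal sketch, assuming the model returns a JSON array of objects that each carry an `id` field (the `index_codes` helper is illustrative, not part of the tool):

```python
import json

# Two records copied from the raw response above, used as sample input.
RAW_RESPONSE = '''[
{"id":"rdc_je4y3yh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_je5aj7i","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_codes(raw: str) -> dict[str, dict]:
    """Parse a batch coding response and key each record by comment ID.

    Raises ValueError when the model emitted something other than a
    JSON array of objects, so malformed batches can be re-coded.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    return {rec["id"]: rec for rec in records}

codes = index_codes(RAW_RESPONSE)
print(codes["rdc_je5aj7i"]["emotion"])  # → fear
```

Failing loudly on malformed output matters here because LLMs occasionally wrap JSON in prose or emit truncated arrays; flagging those batches keeps bad codes out of the results table.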