Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Let me tell you sometihng. Thech support is not going anywhere. Well because you…
ytc_UgxSeZbun…
@boardcertifiable Never be sorry for something that is important to you!
And by…
ytr_UgxqOXcy-…
I felt like I was just told that the world is racist and bias, so we need to mak…
ytc_Ugz20O5vI…
or how about this? Fuck self driving cars and tighten down on bad drivers while …
ytc_UgiDITa8m…
Maybe the issue isn’t what ChatGPT is. Maybe it’s what we thought intelligence a…
rdc_mzy2m5g
I'm full stack developer because of AI planing to switch job in cyber security a…
ytc_Ugw-lbSbk…
How stupid is humanity thinking we can control a AI that is vastly smarter than …
ytc_UgxLX5bMs…
@notasoap I know it's not a good thing to happen to the art world, but I look at…
ytr_UgwNecYzy…
Comment
This is why sam Harris is so insufferable.
“What if *insert impossible scenario* therefore x”
An exhausting pointless way to try to understand the world.
reddit
AI Responsibility
1677106544 (Unix timestamp, ≈ 2023-02-22 UTC)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
[
{"id":"rdc_j8wf290","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"rdc_j9lzz62","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"rdc_jazbzhe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_jccolzx","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"rdc_jdkcidg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
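The raw response above is a JSON array of per-comment codings, one object per comment ID, with one value for each of the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated follows; the allowed value sets are assumptions inferred from the samples on this page, not a confirmed codebook, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Assumed allowed values per dimension, inferred from the displayed samples.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "laissez-faire", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: coding},
    silently dropping rows with a missing id or an out-of-vocabulary value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Validating against a fixed vocabulary like this catches the most common failure mode of LLM coders: a syntactically valid response containing a label outside the codebook.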