Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
all of this reminds me of that little talking blue ball named Wheatley, it's kin…
ytc_Ugw8mU08x…
Not a single reference to zombies. BOO!
If one assumes that other humans are pe…
ytc_Ugi_L9Ps1…
Sad to see tbh. Looked crazy af though as from movies now reality , but giving a…
ytc_Ugx5alMLI…
Doctors are not the big driver of medical costs.
Is the AI gonna eliminate the …
rdc_jw68qvp
There's an option you can disable for (chatgpt) AI to not use ANY of the given i…
ytc_Ugz1pzIen…
@neiklen4320 Where is your empathy for the AI engineers who spent hundreds of …
ytr_UgwJImAiM…
I think the intellectual will just shift. Think about something like animation. …
ytr_UgxJMdT_U…
Hey I also knows autonomous cars but it needs to be controlled and if it's contr…
ytc_UgyxLpPv-…
Comment
>I don't really trust Waymo's self-driving,
I trust humans (excepting you, of course) far less
reddit
AI Harm Incident
1765298007.0 (Unix epoch seconds, 2025-12-09 UTC)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_nsz0xob","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nt87qey","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nt0a33f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_nt0agtr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nt4wf33","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
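The raw response above is a JSON array with one record per coded comment. A minimal sketch of turning such a response into a lookup by comment ID, assuming only the five fields visible in the sample (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) — the full codebook may define additional fields or values:

```python
import json

# Field names taken from the sample response above; treating them as
# required is an assumption based on that sample alone.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Map each comment ID to its coded dimensions, rejecting any
    record that is missing an expected field."""
    coded = {}
    for rec in json.loads(raw):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return coded

# Two records copied from the raw response shown above.
raw = '''[
 {"id":"rdc_nsz0xob","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_nt87qey","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

coded = parse_coding_response(raw)
print(coded["rdc_nsz0xob"]["emotion"])  # approval
```

Validating up front keeps a single malformed record from silently dropping a coding dimension when the batch is aggregated later.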