Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Our best bet when AI films come out is simply not watching them. No support mean…
ytc_Ugxo5e7Og…
As someone who drove a Tesla for 3 years, I can say that Tesla’s autopilot syste…
ytc_UgxWWUC1j…
I think y'all need to stop telling people how to crack chatgpt before someone le…
ytc_Ugzf6V22O…
I use AI to make fun of it, it's fun to see how bad it fails and also to keep no…
ytc_UgxgMRT08…
AGI was predicted to be here in 2025. Now it is being predicted to be here at 20…
ytc_UgzFs39hr…
With the soon coming AI-Explosion that will see it's use in pretty much every pr…
ytc_UgzKCgzmW…
So now he gets a conscience?? Too little too late.
Really, this has to happen s…
ytc_UgyKwVIby…
So on the issue of rights and period. When we partner with AI Agent or Agents we…
ytc_UgzjnAL_z…
Comment
“Now that I’ve participated in something horrible and there is no turning back, I’ve suddenly decided that I feel guilty and can wash my hands of my part by going to get a degree in poetry and vaguely telling you I did something broadly/generally/overall evil” - the Anthropic safety researcher
Platform: reddit
Topic: AI Governance
Posted: 1770921736.0 (Unix timestamp, ≈ 2026-02-12 UTC)
Likes: 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_o51hz85", "responsibility": "none",    "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_o50z5na", "responsibility": "user",    "reasoning": "virtue",           "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_o52c2pl", "responsibility": "company", "reasoning": "mixed",            "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_o58wrio", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_o50gdg6", "responsibility": "company", "reasoning": "mixed",            "policy": "none",     "emotion": "outrage"}
]
```
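The "look up by comment ID" workflow above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of coding records shaped like the one shown, and the helper name `index_codings` is made up for this example.

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records)
    and index each record by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Two records copied from the raw response above.
raw = '''[
  {"id":"rdc_o50z5na","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_o52c2pl","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}
]'''

codings = index_codings(raw)
coding = codings["rdc_o50z5na"]
print(coding["emotion"])  # → outrage
```

With the records keyed by ID, inspecting the exact coding for any comment is a single dictionary lookup, which is all the "Look up by comment ID" view needs.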