Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “Heard someone mention the AI should be assessed with the same thoroughness with …” (ytc_Ugwp3BtK5…)
- 44:22–44:30 — “Ok having watched the video now, if what you mean when you say “AI…” (ytc_UgzDMLGKI…)
- “AI “artists” don’t seem to know what sentimental value is. Saying AI is also an …” (ytc_UgzStHHqi…)
- “>"the limitations on how many vaccinations are being made, that's based on ho…” (rdc_grsz0tm)
- “You and everyone else pretending like it’s more than just a program. The point i…” (ytc_UgwKBnOek…)
- “I mean, I think you guys are kinda helping my point. Neither of those situations…” (ytr_UgiiYSCGt…)
- “AI & Nuclear Weapons Not going kills or hurts anyone but only the world leaders …” (ytc_Ugweo2aph…)
- “The only way I can see self driving vehicles ever being safe, is 1.) If all vehi…” (ytc_Ugwxsi4Lv…)
Comment
As if the first thing a rogue AI wouldn't do would be to disable the kill switch. /s
Source: reddit · Topic: AI Governance · Posted: 1716791705.0 (Unix epoch seconds) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_l5tvbik","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_l5umuyk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5v0co8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5v1fy0","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_l5w1yx9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
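The raw response is a JSON array with one record per comment, each coded along the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and a record looked up by comment ID — the field names match the response above, but the sets of allowed codes are assumptions inferred only from the values visible on this page, not a definitive codebook:

```python
import json

# Allowed codes per dimension. These sets are an assumption reconstructed
# from the values visible above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "user", "government", "mixed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "anger", "hope", "mixed", "none"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: record}, validating codes."""
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

raw = '''[
  {"id":"rdc_l5umuyk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''
batch = parse_batch(raw)
print(batch["rdc_l5umuyk"]["emotion"])  # fear
```

Keying the parsed records by comment ID mirrors the page's "look up by comment ID" feature: a malformed or out-of-vocabulary code fails loudly at parse time rather than surfacing later as a bad table row.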