Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "No robot will beat me at bricklaying.. not just straight walls proper bricklayin…" (ytc_UgyX1Y0om…)
- "Humanity doesn't need to be murdered in order to cease to exist. They just need …" (ytc_UgyA1-Wkd…)
- "# 5 is already happening. A rental home listing company(or some such) was just…" (rdc_lv0r7jh)
- "why are all these AI avatars, CIRCLES, Jen Lopez film? ai art programs, etc. …" (ytc_UgyeP17gu…)
- "Ai is the most god awful and down right disgusting excuse i have ever heard for …" (ytc_UgxcyPGl0…)
- "Real, I’m trying to become a better artist and this kinda stuff boils my blood c…" (ytc_Ugw9RsFcg…)
- "If you weren't on the AI overlords' hit list before then I'd be surprised if you…" (ytc_UgzGhTeZg…)
- "So then , according to this video , and all of the warnings online and all of th…" (ytc_UgypFxCdD…)
Comment
> a policy in which they would halt development of their most advanced AI models if they were deemed to have passed certain risk thresholds.

So, it doesn't "turn off," the AI. They just agree to stop halt further development.
Who is this supposed to reassure?

Source: reddit | Topic: AI Governance | Posted: 1716778529 (2024-05-27 UTC) | ♥ 55
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_l5u03bz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_l5u2k4e","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5ukhe9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5u0ena","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5u045q","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
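The raw response is a JSON array with one object per comment, keyed by `id`, which is what makes the look-up-by-comment-ID view possible. A minimal sketch of how such a response could be parsed and validated, assuming the dimension names shown above; the allowed value sets are inferred from the examples and the real codebook may define more categories:

```python
import json

# Allowed values per coding dimension. Only values observed in the sample
# output are certain; the extra entries are assumptions for illustration.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "government", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"indifference", "fear", "resignation", "anger"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID,
    rejecting any dimension value outside the expected vocabulary."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if dim in ALLOWED and value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded

raw = """[
  {"id":"rdc_l5u03bz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_l5u2k4e","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

coded = parse_raw_response(raw)
print(coded["rdc_l5u03bz"]["policy"])  # -> regulate
```

Indexing by ID up front means the inspection view can answer a "look up by comment ID" query in constant time, and the validation step surfaces any value the model invented outside the codebook before it reaches the coding table.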