Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment directly by its ID, or inspect one of the random samples below.
Random samples
- Well, AI is not human, so we don't know. Maybe it will enjoy our attempts to bet… — ytr_UgyfF6QXO…
- As technology and data storage evolves and a few GB are no problem anymore, so t… — ytc_UgzKj9Y33…
- It definitely feels like we're stepping into a sci-fi world with conversations l… — ytr_UgzOLFfkT…
- AI expert: "AI is everything, I am AI." Sounds similar to Cyber expert: "Cyber i… — ytc_UgzW2KTYd…
- AI artists are commissioners. Please don't blame the technology for what people … — ytc_UgzOrRfdq…
- I want a robot that always screams rather than talking, a defect in jap robots… — ytc_Ugxmes-yM…
- Screw the working class, they are screwed already, AI will take out the middle c… — ytc_Ugxhqxvmq…
- Wow. This is a truly singular take on AI intelligence, and it’s brilliant. Ya, n… — ytc_UgyY9DHlZ…
Comment
> "Kronos" was before super dump of garbage-n garbage-out.
> If we rely on robot law saying do not harm humans, what do we expect about the following:
> - Do not harm humans.
> - Do not harm humans who are on this list.
> - Do not allow your "self" to be deterred from following the directives.
Source: youtube · Topic: AI Governance · Posted: 2025-09-08T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxv3IrsPaUE9nh4ht54AaABAg","responsibility":"elite","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8O0veniKSI4eferZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx4EqclHLIQKfPKwwx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyGkoX9UfiEGypctYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIe80JAbcoEKIl27R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzXKCYZqsDwRw2-wBh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHBu-8qwDqXQKNPZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEvHvMCdYK9vRJnUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzSt7HNlfaUCw553I14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy553FhxvPqD_nDO2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
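The model returns one JSON object per comment, keyed by comment ID. A minimal sketch of how the lookup-by-ID feature might consume such a response (the `lookup` helper is illustrative and not part of the tool; the two rows are an excerpt of the array above):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_UgwHBu-8qwDqXQKNPZR4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyGkoX9UfiEGypctYd4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index the parsed rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment ID."""
    return codes_by_id[comment_id]

print(lookup("ytc_UgwHBu-8qwDqXQKNPZR4AaABAg")["policy"])  # regulate
```

Indexing once up front means each subsequent ID lookup is a dictionary access rather than a scan of the full response.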