Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Which is good. Manufacturing vaccines during a pandemic should be an expense and…" (rdc_grrr4cx)
- "Dont put her in the hood, you'll come back to the wig and the dentures missing 😂…" (ytc_Ugxdvypqh…)
- "Tesla better respond or shut that automatic driving off plus put a DOOR HANDLE o…" (ytc_UgwI2sK-i…)
- "Ultimately a highly advanced AI could have given the most sophisticated, well th…" (ytc_UgyVJgWeK…)
- "AI safety research is like climate change research. The incentives are to projec…" (ytc_UgzRsXb63…)
- "Except it's been proven that ai is more responsive when threatened than when ask…" (ytc_UgxKpBT0i…)
- "It's not even useful to discuss it in terms of what large companies will do with…" (ytc_UgztC9IgW…)
- "I do genuinely think that AI has the potential to basically destroy the Internet…" (rdc_ohzrv5j)
Comment

> In response to a robot with emotions. It almost seems like it would have a version of the neurological responses as well. In a threatening situation, a robot would have to spool up it's systems and allocate more energy to its functions similar to a fight or flight system in humans.

| Source | Video | Posted |
|---|---|---|
| youtube | AI Governance | 2025-06-16T15:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwYCBHrVErkBvcrVKN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwcIq7520_uPj3EDzR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxWubuKf6X-buZAQ5p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugym69wt-M54rMiBeh14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyf-W6uBCbvywNgEcF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxh4sC_c-cuL0K9nYh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzdvPqMGopQYFuv8rN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxUHR3BtX69j0nGf0p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzFvIbH7kQ3vdedXDl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxU4MlzNFn4XrYdoZJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
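A raw response in this shape can be validated and indexed by comment ID before it is stored. The sketch below is one way to do that; the allowed values per dimension are inferred only from this sample batch, not from an authoritative codebook, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension — inferred from this sample batch,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the allowed set, so malformed codings are caught before storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = '[{"id":"ytc_x","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
coded = parse_coding_response(raw)
print(coded["ytc_x"]["reasoning"])  # mixed
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: each coded record is retrieved in one dictionary lookup rather than a scan of the batch.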