Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- rdc_cqitxxx — "Yes but we do not have sentient machines in the sense that they would be entirel…"
- ytc_UgyLCfrce… — "Cybernetic organism, a metal skeleton covered with rubber, Terminator 101 system m…" (translated from Uzbek)
- ytc_UgztCyOG8… — "Stay ahead of evolving consumer trends with Phlanx's caption generator. Its AI-d…"
- ytc_UgyDva_9r… — "Ive felt this and since ai is happening, its our responsibility, parallel to a p…"
- ytc_Ugx5309Q7… — "I actually do this, I don't always say please or thank you, I do when I'm like W…"
- ytr_UgxASsktv… — "@ciapsychosis well what we have essentially done is give it a small \"instinct\" t…"
- ytc_Ugydy8U2b… — "The rationale to be suggested with the mass migration of DATA CENTERS to advance…"
- ytc_UgzuGMc28… — "Actually technically in real logic humans are smarter than AI because in order f…"
Comment

"Robots are on remote control their AI runs on a program she's being told what to say. She is also being told to make face expressions. But they don't want you to know robots only run on remote control for movement they are not dangerous. AI robots will not hurt humans. As soon as you tell a robot to harm or destroy it must be remote control to do so."

Source: youtube · AI Harm Incident · 2026-01-09T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyM0_TMhNYJS3c5cJx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzbFR_c9k3E66FFXZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwoK08EFi9iCUWA5qR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgytyOPonzHX6dLO2d94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbBjRirIFsbyC08qx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiuNjM32pUn2vfEtN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLqoEJ8hPbnl4gXgV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyDlgXwvpvQajZ3olp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzjxBhcQ9D0nBhtrch4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyo_Bco4-QN4Uk96k14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
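The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a batch response might be parsed back into per-comment codings — the allowed value sets below are inferred from the samples on this page, not from a documented schema:

```python
import json

# Allowed values per dimension, inferred from the codings visible on this
# page (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}.

    Entries with a missing id, or with any dimension outside the
    allowed sets, are dropped rather than silently kept.
    """
    codings = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            codings[cid] = {dim: entry[dim] for dim in ALLOWED}
    return codings


# Hypothetical one-entry response, mirroring the format above.
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"approval"}]')
print(parse_batch(raw)["ytc_x"]["emotion"])  # approval
```

Validating each dimension against a closed set is one way to catch the common failure mode where the model invents an off-schema label; a rejected entry can then be queued for re-coding instead of corrupting the dataset.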