Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't find it odd that AIs would harm humans to prevent some other larger harm - we think that way. We do it all the time with every arrest, every riotus protest suppressed, every war. So, it seems pretty much within the realm of possibility. I'm not shocked by that at all... Do I like it? No, I don't feel that AI should have that power unless carefully contained and controlled - such as in military AOs.
Source: youtube · Posted: 2025-11-01T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwI53aj46GTuVwJRjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3d2VbXoCyKPzhwd94AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwqDlhcyFg1o3G2IeZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugztyh_u4e3npKzqOZp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxwX6tXqQKsWc_HhDd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz9M1PBELzJa2YrPVB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHRzLW13DxO6Z0OA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzJBCiMeuRhZn5_DXd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyGSJi7qvkB7-UuPv54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzDBP4XZDdwEZfZ5lZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
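The raw response above is a JSON array with one object per comment, keyed by comment ID, carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of looking up one comment's codes from such a response — `lookup_codes` and the two-row `raw_response` excerpt are illustrative, not part of the tool's actual API:

```python
import json

# Excerpt of a raw model response: a JSON array of per-comment codes,
# using two rows taken from the full response shown above.
raw_response = '''
[
  {"id": "ytc_UgwqDlhcyFg1o3G2IeZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugztyh_u4e3npKzqOZp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

def lookup_codes(raw: str, comment_id: str):
    """Parse the model's JSON array and return the codes for one comment ID,
    or None if that ID is absent from the response."""
    codes = {row["id"]: row for row in json.loads(raw)}
    return codes.get(comment_id)

result = lookup_codes(raw_response, "ytc_UgwqDlhcyFg1o3G2IeZ4AaABAg")
```

Building a dict keyed by `id` makes repeated per-comment lookups constant-time, which matches how the viewer resolves a coded comment back to its raw model output.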