Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its comment ID.
Random samples

- Attempting to artificially alter this will lead to the same result in opposite. … (ytc_UgxuOW_Eg…)
- I totally agree with this video. We always end up moving toward some kind of mon… (ytc_Ugx_r0ner…)
- stable diffusion already has billions of artwork in its database, this just woul… (ytr_Ugzw2cBze…)
- What is the point of an ai that basically says I think the president of Ethiopia… (ytc_UgyTIKOKB…)
- 1:25 I definitely think AI *can* be used that way. Simple text-to-image isn't ev… (ytc_Ugyr85iQp…)
- It’s not that realistic, it hardly does anything except attempt to smile and loo… (ytc_UgxIOG8HY…)
- @stephenpeterson7558 thanks for your reply. Im assuming this is an LLM answer. … (ytr_UgyR8SvVs…)
- 2nd Translation: "When drones start shooting & bombing all common people, it's no… (ytc_Ugys9iRln…)
Comment

> (I’m no expert in this nor do I know how ai works) I think we remove/turn off the dangerous parts of all ai, and set up heavy restrictions. Get rid of the harmful ai companies. I personally don’t want them CLANKERS running my earth. We created them we can control or remove them.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-09-07T03:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
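The Coding Result table above is a direct rendering of one coded record. As a minimal sketch (the helper name is hypothetical; the field names follow the JSON objects in the raw response below):

```python
def render_coding_table(record: dict[str, str], coded_at: str) -> str:
    """Render one coded record as a markdown dimension/value table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)

print(render_coding_table(
    {"responsibility": "developer", "reasoning": "consequentialist",
     "policy": "regulate", "emotion": "fear"},
    "2026-04-27T06:26:44.938723",
))
```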
Raw LLM Response
```json
[
{"id":"ytc_UgyU4YoVixwwQOmemSF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyN5k7k-adeyuayMB54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzClODvkS5hAqK_ryJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwSS4kh_HHw5anKNhh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxnUNqqyHHHNlQlMwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzEynszzih0LqWG7SJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKn6m6iwbGJNu_nCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzgBEVDH9rpEl9Uta94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzMGclaKu54VCXifcV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzwz0QBVkIFmfjbZKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
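A raw response in this shape can be parsed and indexed by comment ID before being stored. Below is a minimal sketch; the allowed values are only those observed in the sample output above (the full codebook may define more), and the function name is an assumption, not the tool's actual API:

```python
import json

# Dimension values observed in the sample responses above.
# Assumption: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index valid records by comment ID.

    Records missing an id, or carrying a value outside the codebook,
    are dropped rather than stored.
    """
    coded: dict[str, dict[str, str]] = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID to index by
        if any(rec.get(dim) not in values for dim, values in ALLOWED.items()):
            continue  # out-of-codebook value, skip the record
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw)["ytc_example"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan over every response batch.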