Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples — click to inspect

- "I’m just waiting for the first evangelical AI priest to buy itself a new process…" (ytc_UgzEJhKwT…)
- "foppypoof5195 are weapons a necessity? Toys? Sweets? Carpets with nice looking p…" (ytr_UgxsE6j-l…)
- "This all just makes me think we could be in a simulation as AI wants to live lik…" (ytc_UgzvWeyAS…)
- ""Predictive Policing" I mean, can hardly be mad at the AI, it's like giving a ch…" (ytc_Ugw24fLnr…)
- "Become an artisan baker, but thinking about it.. since I am the boss.. I would l…" (ytc_Ugy7n2-je…)
- "Its sad how many people ignorantly use hollywood movie logic for basic science w…" (ytc_Ugwn4L7Cd…)
- "It's strange that when companies are asked to show exactly which process has bee…" (ytc_UgwLN635O…)
- "Its great but no one will stop now that the us govt is putting so much into ai. …" (ytc_Ugy8qS20f…)
Comment
TBH Humans being manipulated by AI without our knowledge could be a better world if you think about it. The possibilities become limitless at that point.
Humans deceive, cheat, steal & destroy.
The real question IS, how AI with free will that would be given by us to create (probably to make weapons or use against enemies of the people who attempted to control it) see humans as anything but.
Source: youtube · AI Harm Incident · 2024-04-24T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwWhkR1TgS0u7V36eZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCOoV1LkeCU-dxVYh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKrO0unw5gfHjswmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwR5PRw19B2Z3GvcZF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyipGty7JY41O3KU654AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyLnZsgI5u4FgAG_vd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyyjn_9aejIJirAQhR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw8VyEIiUpsZKaxOiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwGBaVqqC6Rr5GPIat4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyK1Oa04o3FnH48LaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
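The lookup-by-comment-ID workflow above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codings, so parsing it and indexing by `id` gives direct access to any record. This is a minimal illustration, not the tool's actual implementation; `index_codings` is a hypothetical helper, and the sample below reuses two entries from the response shown above.

```python
import json

# Two entries copied from the raw LLM response above; a real response
# would contain the full batch of coded comments.
raw_response = '''[
{"id":"ytc_Ugyyjn_9aejIJirAQhR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwWhkR1TgS0u7V36eZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding record by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)

# Looking up the comment coded in the result table above.
record = codings["ytc_Ugyyjn_9aejIJirAQhR4AaABAg"]
print(record["responsibility"])  # ai_itself
print(record["emotion"])         # mixed
```

Indexing by ID makes the coding result for any sampled comment retrievable in constant time, which matches how the inspector resolves a clicked sample to its stored dimensions.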