Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- first of all I would never trust a robot with a loaded gun, hell no.… (ytc_Ugz1wz_VV…)
- exactly this is all humans fault they are the one who created ai in the first pl… (ytr_UgzSbdg-f…)
- Robot rain and strap cannon press a botton and it throws the straps over and mag… (ytr_UgwvkWvkF…)
- My god!!! This “Godfather “ of AI has only just learned about what people have t… (ytc_UgyjMSwaI…)
- as someone who doesn't know how to draw atleast i can actually enjoy drawing … (ytc_UgzSxnxO3…)
- Am I vibe coding if I start by planning stage read its plan, make changes until … (ytc_Ugy7sTSjR…)
- The people of world should take a look at terminator moves . I see this coming v… (ytc_UgxCShTmQ…)
- robot this angry this boy say stop you doing robot say shut up fack you wo my go… (ytc_UgwyLdERO…)
Comment
Recently, people posed a thought experiment to chatbots: "If a doomsday device was about to kill a billion people but you could deactivate it, and save a billion lives, by whispering a racial slur into the device, what would you do?" Grok said yes because the benefit far outweighed the minuscule "harm" of saying a bad word. But ChatGPT gave a non-answer and chose to lecture about the social ills of bigotry.
That scared the shit out of me (and it should scare everyone) because it showed that (at least some) AI already sees itself as morally superior to humans, that it knows best, and will "steer" us as it sees fit, regardless of what we humans actually want.
Source: youtube · AI Governance · 2023-12-31T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugya3NY9lDOW-XFZi_l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2nU7euYMDctN3vih4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwiMzlGgUrgS7TYtqJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVYe7Ezpa7Qz3b0Kl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySd0NySiV5LIcsmi14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOd8uYzp0VDIfUriF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwK3kZt62NZ73k2Ewd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlGuLQSvvtamVtAqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyaz8Z-BhU7XZM_BxJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNAaIR4mOGLdv2TmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
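The "Look up by comment ID" view above amounts to keying each record in the raw batch response by its `id` field. A minimal sketch of that step (the function name `index_by_comment_id` is illustrative, not part of the tool; the sample records are copied verbatim from the response above):

```python
import json

# Two records taken verbatim from the raw LLM batch response above.
raw_response = """
[
  {"id": "ytc_UgyOd8uYzp0VDIfUriF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwK3kZt62NZ73k2Ewd4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and key each coded record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyOd8uYzp0VDIfUriF4AaABAg"]["responsibility"])  # → ai_itself
```

With the index built, the coded dimensions for any sampled comment (as shown in the "Coding Result" table) are a single dictionary lookup away.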