Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "You train your AI using hundreds of conflicting cultures, morality and philosoph…" (`ytc_Ugw7ARqnz…`)
- "The explanation of AI is a long-winded way of explaining that 1 group wants to e…" (`ytc_UgxTwNLtg…`)
- "7:45 yes artists should be waaay more aware of all the blue collar jobs that hav…" (`ytc_Ugw31IH0R…`)
- "Thank you Elon ❤️ for AI safety 🤖 but you know I don't agree with the Neurlink 😡…" (`ytc_UgzpCS-Gq…`)
- "they are firing the managers alongside the juniors in many places right away. in…" (`ytc_Ugxp6WvbT…`)
- "WSJ always dig up the same content on this topic , because people love to talk a…" (`ytc_UgwpC-0bL…`)
- "Hehe .. yes, because AI still needs to be trained so the better human beings can…" (`ytc_UgynvjPyw…`)
- "We don’t need ai 🤖 to take over any human jobs . The more we are supported these…" (`ytc_UgxVyXpLo…`)
Comment

> Robots, or androids as these are, are designed and programmed by humans. Everything they know down to reflections of personality and morality come from man, why anyone thinks a robot will rise above human flaws is perhaps a little naive.

Source: youtube · AI Moral Status · 2022-12-11T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
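Each coded record carries the same four dimensions shown in the table. As a minimal sanity-check sketch, the value sets below are inferred only from the responses visible on this page (the real codebook may contain additional categories), and the `validate` helper is hypothetical:

```python
# Allowed values per dimension, inferred from the coded responses on this
# page; the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed sets."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

# The coding result from the table above passes validation.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "unclear", "emotion": "indifference"}
print(validate(coded))  # []
```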
Raw LLM Response

```json
[
  {"id": "ytc_Ugzlrh20Y8BIafy1Tbd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz-aSb8VJWoeu52EoF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz55I0SVhgOXZK8_Dp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxRWkFomi7wMVUJXt54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyfE1W78halmwaGIYh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwkKs0UvLPpmroMAAJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy7pD6bNEESmJSjG254AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz23nYaSTu00wo7uh94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx1b1andeuY4p4EIUZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzJ7S0JGumfKRDRDLd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
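The raw response is a JSON array of per-comment coding records, which is what makes lookup by comment ID possible. A minimal sketch of parsing such a response and indexing it by ID, using field names taken from the response above (the `index_by_id` helper is hypothetical, and the two-record sample is abbreviated from the full array):

```python
import json

# Abbreviated sample of the model's raw output: a JSON array of coding
# records, one per comment, in the shape shown above.
raw_response = '''
[
  {"id": "ytc_UgwkKs0UvLPpmroMAAJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz55I0SVhgOXZK8_Dp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and index the records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwkKs0UvLPpmroMAAJ4AaABAg"]["reasoning"])  # deontological
```

Indexing into a dict keyed by ID makes each subsequent lookup O(1), which suits the "look up by comment ID" workflow this page describes.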