Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| Love the video; as has been said by plenty already, your whole tone, style, argu… | ytc_UgznGcWFp… |
| remember yall, even someone who only knows how to draw stickmen is better than s… | ytc_UgxhyTH9s… |
| Just FYI, algorithmic trading has already taken over the stock market and makes … | ytr_UgxxVufKS… |
| Look, nobody cared about all the cashiers that lost their jobs to self checkout … | ytc_Ugwzowcmr… |
| @LDJ757 You literally ARE criticising people, though. You go on to say “why sho… | ytr_Ugxm7zIP0… |
| @metalnmunchies9070 Fair point, but keep in mind I was referring to how the mod… | ytr_Ugwvwuxhu… |
| I’m going to start a political party that demands AI get to vote and be given ri… | ytc_UgxbuAam_… |
| Another point is does AI even have a limit on its intelligence? This is highly d… | ytc_Ugy6zy5JS… |
Comment
I think the perspective of this video anthropomorphizes AI bots in regards to how they are motivated, and seems to suggest that they have independent motivations or the ability for complex planning to achieve nefarious goals in covertly. I don’t think AI has these characteristics inherently. The degree to which it is dangerous is dictated by the purpose it was given, and if there is a flaw in the logic that defines its purpose, then that can create one that attempts to achieve its goal through undesirable/dangerous methods. It’s not a sneaky monster plotting on our extermination. It’s a man-made model. The real danger is the incompetence or evil of the people designing them.
youtube · AI Moral Status · 2026-01-07T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
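For anyone scripting against these records, the dimensions in the table can be modeled as a small schema. The sketch below is a minimal Python representation, assuming the value sets seen in this record and in the raw responses further down; the class and field names are illustrative, not part of the tool itself.

```python
from dataclasses import dataclass

# Value sets below are assumptions inferred from the records shown on this
# page; the actual coding scheme may define additional categories.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"indifference", "resignation", "mixed", "outrage"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"

    def is_valid(self) -> bool:
        """Check every dimension against the assumed value sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```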
Raw LLM Response
[
{"id":"ytc_UgwiVW4fspKbESCslzV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyW2gxb4xOJTuyunG94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJP19vQlfd4XyZHGt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy6ai3mlRXUxQvNInp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxm8DTBMvW6V0RIOKh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPKWliR_FH6CDg1mB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMfpKzkakyxZBx_SB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7JV8HJraF7JTFwut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxXsrwfk1DCFmRzDAZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzCuGo2JpYH-jwps6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
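Since each raw response is a JSON array with one object per coded comment, looking a record up by comment ID amounts to parsing the array and indexing it by its `id` field. A minimal sketch, assuming the response body is available as a string and parses cleanly (the function and variable names are illustrative):

```python
import json


def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coding records)
    into an id -> record mapping for lookup by comment ID."""
    records = json.loads(raw)  # raises ValueError if the model returned malformed JSON
    return {record["id"]: record for record in records}


# Usage against the response shown above:
#   index = index_raw_response(raw_response_text)
#   index["ytc_UgwiVW4fspKbESCslzV4AaABAg"]
#   -> {"id": "ytc_UgwiVW4fspKbESCslzV4AaABAg", "responsibility": "ai_itself", ...}
```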