Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The only issue I have here is that the whole thing people bring up is that "you can't program morality" and that's what makes robots attack. That's fucking stupid, because in most cases of fiction/hypotheticals of these robot uprising situations they deem humans immoral. That is the robot making a moral judgement. That's fucking stupid. There won't be a fucking arranged robot uprising, these tech giants are fucking with us for a laugh.
youtube 2015-07-30T06:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugh3PVktsKFg83gCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Uggo0fVOZWLI5HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggmztviY8m9tHgCoAEC","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgjVOjIc9xnc4HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgiLykdI4thQmXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugjk_By0sXzqZ3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghmddVZf9EUMHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UggJ1ITpg8SLrngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggfhQKYR5czPXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghkLZ3Ypih0MHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
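The raw response is a JSON array of per-comment records, each carrying the four coding dimensions shown in the table above. As a minimal sketch of how such a response might be parsed and tallied (the field names come from the response itself; the excerpt below is just the first three records, and the indexing/tallying approach is an assumption, not the pipeline's actual code):

```python
import json
from collections import Counter

# Excerpt of the raw LLM response above (first three of ten records).
raw = '''
[
  {"id":"ytc_Ugh3PVktsKFg83gCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Uggo0fVOZWLI5HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggmztviY8m9tHgCoAEC","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"indifference"}
]
'''

records = json.loads(raw)

# Index codes by comment id so a single comment's coding can be looked up,
# as the inspection view above does for ytc_Ugh3PVktsKFg83gCoAEC.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_Ugh3PVktsKFg83gCoAEC"]["emotion"])  # outrage

# Tally each coding dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, Counter(r[dim] for r in records))
```

Because every record shares the same four keys, the same loop works unchanged on the full ten-record response.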