Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My first recomendation would be that we don't create these problems in the first place! If we do, then it's stupidity at its finest, because now we would have to worry about more morality and more arguments in the future! But...if a robot was to be sentient, then I wouldn't have anything against the robot itself.
youtube AI Moral Status 2017-02-24T15:5… ♥ 46
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgivVbsw4Xna5HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugj_ypQQgCMlY3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgimjZIms5XFuHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgjyZ86QaI4isngCoAEC","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugj_RptSNZh00XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgjzbQdL9YKm8HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UggfrijmLIHsQ3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugi7_U1vyGWcxngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgiBd7HdCU2ShHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UggTp276p_PxP3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"})
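Note that the raw response above closes its JSON array with `)` rather than `]`, so it is not valid JSON; this is consistent with every dimension in the coding result being recorded as `unclear`. The sketch below, with hypothetical function and variable names (`parse_coding_response`, `FALLBACK` — not the actual pipeline code), shows how a coding pipeline might parse such a response and fall back to `unclear` when parsing fails:

```python
import json

# Assumed fallback record for a comment whose coding could not be parsed.
FALLBACK = {"responsibility": "unclear", "reasoning": "unclear",
            "policy": "unclear", "emotion": "unclear"}

def parse_coding_response(raw: str, ids: list[str]) -> dict[str, dict]:
    """Map comment id -> coded dimensions; all 'unclear' if the JSON is invalid."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # e.g. a response that closes the array with ")" instead of "]"
        return {cid: dict(FALLBACK) for cid in ids}
    return {r["id"]: {k: r[k] for k in FALLBACK} for r in records}

# A response terminated with ")" fails to parse, so the comment is coded
# "unclear" on every dimension, matching the Coding Result table above.
bad = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"})'
print(parse_coding_response(bad, ["ytc_x"]))
```

Under this assumption, a single stray character in the model output silently downgrades the whole batch to `unclear`, which is why inspecting the exact raw response is worthwhile.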