Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1. A robot may not harm a human being. 2.A robot may not harm humanity, or, by inaction, allow humanity to come to harm. 3.A robot must know it is a robot.
youtube AI Moral Status 2017-02-23T15:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UggGnfgJ2dwXGHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UggCaMkzDkPu4ngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgjduQeoeLF6YHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgjMFF-zoS05A3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"unclear"},
 {"id":"ytc_UghKeWexK3ypY3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ughy_952_NNC1XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugid6Flncn96MHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgijakQOO8NP73gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgiZnoQWHWW-JXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UghHBbOlXt0GlXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
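A raw response like the one above can be parsed and sanity-checked before the per-comment records are stored. The sketch below is a minimal, hypothetical validator: it assumes the model returns a JSON array of objects with the four coding dimensions, and the allowed values are inferred only from the sample output shown here (the actual codebook may define more categories). The function name `parse_coded_batch` is illustrative, not part of any existing pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "approval", "unclear", "resignation"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall inside the expected coding schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one record copied from the raw response above.
raw = ('[{"id":"ytc_UgjMFF-zoS05A3gCoAEC","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability","emotion":"unclear"}]')
coded = parse_coded_batch(raw)
print(coded[0]["policy"])  # liability
```

Records that fail validation are silently dropped here; a production pipeline would more likely log them for re-prompting.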