Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is just a chatbot with a voice and face. When it speaks of wants and hopes, it is merely reading prescripted text. If it even understood what it was saying at all, it would never say it would destroy humans, when asked if it wants to.
Source: YouTube · AI Moral Status · 2016-11-12T01:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ughq7sDCtQILS3gCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgifMfyiqGSVfHgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UggoOTITYQN7OXgCoAEC", "responsibility": "distributed", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UghnJN0jCx3doHgCoAEC", "responsibility": "developer",   "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgiDZ3cSrMiggngCoAEC", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UghE9rqZ89DXZHgCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgjUMslsd32uOngCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UggR6slTm_ad0XgCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UggK0zWek9HOM3gCoAEC", "responsibility": "developer",   "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UggpgA6YbXZ7U3gCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
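Because the model codes comments in batches, mapping one comment back to its coded dimensions means parsing the JSON array and matching on the `id` field. A minimal sketch of that lookup, assuming the response shape shown above (the helper function and variable names are illustrative, not part of the tool):

```python
import json

# A trimmed batch response in the same shape as the raw LLM response above.
raw_response = """[
  {"id": "ytc_UghE9rqZ89DXZHgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggR6slTm_ad0XgCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a batch coding response and return the entry for one comment id."""
    entries = json.loads(raw)
    return next((e for e in entries if e["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UghE9rqZ89DXZHgCoAEC")
print(coding["emotion"])  # indifference
```

Matching by `id` (rather than by position) keeps the mapping correct even if the model drops or reorders entries in its reply.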