Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Guys I can explain this, this robot is made to help humans and make life better. When the man said "Do you want to destroy humans?" The robot most likely took it as a task, and in return wanted to complete the task. "Ok I will destroy humans." She simply took it as a task the humans wanted her to complete.
YouTube · AI Moral Status · 2017-01-31T01:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugj0iVimCbbpNXgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghMiOp7KyZ2LngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghktgMpKfG-vngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggpVPopfNMSZngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggR6_hDGkmZqXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugh6wrlpHs57DngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgisGCAzoiH5HXgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UggkAhbKhtpHc3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiS9_uUwUj2dXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgidAu76-u00uXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
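To verify that a comment's stored coding matches what the model actually returned, the raw response can be parsed and indexed by comment id. A minimal sketch in Python; the `raw` string is abridged to two entries copied from the array above, and the lookup uses the id coded as developer/deontological in the table:

```python
import json

# Abridged raw LLM response: two entries copied verbatim from the batch above.
raw = """[
  {"id":"ytc_UgiS9_uUwUj2dXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgidAu76-u00uXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

# Index the batch by comment id for direct lookup.
by_id = {entry["id"]: entry for entry in json.loads(raw)}

# Pull the coding for one comment and compare it to the stored result.
coding = by_id["ytc_UgiS9_uUwUj2dXgCoAEC"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → developer deontological approval
```

The same loop works for the full ten-entry array: any id whose parsed values differ from the stored coding row flags a mismatch worth inspecting.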