Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, if robots demand rights, then I say we ask about what rights they want. And if they are reasonable, they shall be granted and that's that. I mean, if a robot asks for the right to kill people that would be unreasonable but tbh I don't see that happening so I think we would get along just fine.
youtube AI Moral Status 2017-08-14T00:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       contractualist
Policy          liability
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwsoWcZL6vvWs1sU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwkm27kBj-Nko0hqed4AaABAg","responsibility":"society","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxaCe8v2icP1o2wVtp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzbd6o3_ChC_IAdGUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxN08ESQaXfpdIzaad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdCiXaINfQ8-FMuc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEDXQOHqCotJGpdh14AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
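A raw response like the one above can be parsed into per-comment records and checked against the coding scheme. The sketch below is a hypothetical example; the allowed-value sets are inferred only from the values visible in this output, not from the tool's actual codebook, and the `parse_codings` helper is not part of the tool.

```python
import json

# Assumed coding scheme, inferred from the values seen in the raw response above.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "society", "none", "unclear"},
    "reasoning": {"contractualist", "consequentialist", "deontological",
                  "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records whose
    dimension values all fall within the assumed coding scheme."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: the last record from the response above.
raw = ('[{"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"contractualist",'
       '"policy":"liability","emotion":"approval"}]')
coded = parse_codings(raw)
print(coded[0]["policy"])  # liability
```

A record whose value falls outside the assumed sets (e.g. a misspelled label from the model) is silently dropped here; logging the rejected records instead would make coding failures easier to audit.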