Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My answer: could the being in question possibly act unexpectedly? For example, if you told a robot to jump off a cliff, and it was programmed to follow all orders, it would not be sentient if it just obeyed. If it asked a question or refused, then I would say it is sentient.
youtube AI Moral Status 2020-04-04T22:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyEDzO90QsnFbWcnEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwUHuTfYU5VKSfUG3B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyxpaqVDeoPv3rNWwh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjBUnR8EH15kAf3p14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgztsniMRBRY5BjIA-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwYq03i3cmRKs_6tSt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygjX3vlWErZD8mn3B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy8a9M1IkMtpMbDPz94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgweNd9w8TDQxZgy4-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzF58XdsqtoJfRMROF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
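A raw batch response like the one above can be parsed into per-comment records and sanity-checked before the codes are stored. The sketch below assumes the field names shown in the JSON and infers the allowed value sets from the responses themselves; the actual codebook may permit additional values, so `ALLOWED` is an assumption.

```python
import json

# Allowed codes per dimension, inferred from the responses above.
# ASSUMPTION: the real codebook may define more values than these.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    keeping only entries whose codes fall inside the allowed sets."""
    records = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            records[cid] = codes
    return records

# Minimal example using the first entry from the response above.
raw = ('[{"id":"ytc_UgyEDzO90QsnFbWcnEd4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(parse_batch(raw))
```

Validating against a fixed value set catches malformed or hallucinated codes early, so a single bad entry is dropped rather than silently written into the coded dataset.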