Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'd say yes, if an AI can basically think like a person then it becomes a person to me then it deserves at least the right to know why we do the things we do to it and whether it wants to do it or not, if they can't "feel" like we do then at least give them the right to avoid as many risks as they can. The right of self preservation should be universal no matter if the being is code or blood.
youtube AI Moral Status 2019-06-20T17:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyOJfYxKPGjB1Ix7KN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwVNtk5A7TC4hfNsjF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwoZKwVbJjpNDmfwNV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwu2PuzXEwwlro1exp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxHc4Um5b97zb_H4hZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgwI2KA_rxa7MaIy4eR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzp0W5FNyMcz-M74Gp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzM0zeb2XqYwQZ661N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRq0G8ckN4ASLxiNB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzoLwzNi9nE5fJ_-u94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
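The model returns one JSON array per batch, with one object per comment id covering the four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked in Python — the value sets in SCHEMA are inferred from the responses shown above, and parse_raw_response is a hypothetical helper, not the coding tool's actual API:

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# raw responses above; the pipeline's real schema may differ.
SCHEMA = {
    "responsibility": {"none", "user", "developer", "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear", "ban"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment id,
    rejecting any value that falls outside the expected schema."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in SCHEMA.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={item.get(dim)!r}")
        # Keep only the four coded dimensions, keyed by comment id.
        codings[cid] = {dim: item[dim] for dim in SCHEMA}
    return codings
```

Looking up a single comment id in the parsed result then recovers exactly the Dimension/Value pairs shown in the coding-result table above.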