Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, those are older tests, the newer one explicitly stated that the AI should put human safety above its goals.
youtube AI Moral Status 2025-12-16T15:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyJn43x5FTaZOcpb9F4AaABAg.AQus1lQB9gJAR1G0QIoVSs","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxzYb1OQkbEBvhHF614AaABAg.AQsCHqUeHBwAQvFLnT7o5G","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwlhaVp9GabDdGwgZd4AaABAg.AQrspJ6tmaSAQrvWMmLG_K","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugw6xwjArXVoZ3R8gOB4AaABAg.AQrTl1_A9MJAQrW8V8mExj","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxQDg74duZmCE1M3KJ4AaABAg.AQn_BPrzdymAQndNxn63UM","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytr_UgwyT013V4Be3OifIL94AaABAg.AQnTnzC3pfPAQnV3ylqGc2","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwICutqsHEILkIBKfh4AaABAg.AQnQ8Al7C38AQvi7XC5xc6","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_UgwICutqsHEILkIBKfh4AaABAg.AQnQ8Al7C38AQwyyI47rsv","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_UgwICutqsHEILkIBKfh4AaABAg.AQnQ8Al7C38AQyBaRrLqQq","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgyRNBv2JguQ0NS9nH14AaABAg.AQnEF5Ud18cAQnF9nIedQJ","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
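A raw response like the one above can be parsed into per-comment coding records for inspection. The sketch below is a minimal example, assuming the dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the JSON; the record ID here is hypothetical, and the `"unclear"` fallback for missing dimensions is an assumption, not part of the tool's documented behavior.

```python
import json

# Dimension keys as they appear in the raw LLM response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records)
    into a list of dicts, one per coded comment."""
    records = json.loads(raw_response)
    parsed = []
    for rec in records:
        coding = {"id": rec["id"]}
        for dim in DIMENSIONS:
            # Default to "unclear" if the model omitted a dimension
            # (an assumed convention, matching a value seen in the data).
            coding[dim] = rec.get(dim, "unclear")
        parsed.append(coding)
    return parsed

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"approval"}]')

codings = parse_codings(raw)
print(codings[0]["responsibility"])  # developer
```

Parsing into plain dicts like this makes it easy to cross-check the raw model output against the coded values displayed in the result table.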