Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If the ai didn't want to destroy all humans before, I think it might after this conversation.
Source: youtube · Video: AI Moral Status · 2025-07-21T18:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwV_xS5-TSqwWeSgCh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzmIVoEMI7aCrAmtB54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyK1nAw6Lv6zDtVfjt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzlYVhV20ftLMOXQel4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxkBulSqcZmkbvbH6B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwCzrWGQPBUwPZZ3jN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgznEANtOkNHIrAwTrh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwnd1FRSVOP7kYBjHx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyf7IpL6rjw0aPvv8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwAB_w5FlJUAAwJhs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
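A raw batch response like the one above can be turned back into per-comment codings by parsing the JSON array and checking each record against the coding scheme. The sketch below is a minimal, hypothetical example: the `ALLOWED` value sets are inferred only from the labels visible in this dump, and the real codebook may define more values.

```python
import json

# Allowed values per coding dimension, inferred from the labels seen
# in this dump (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding},
    skipping any record with an unknown dimension value."""
    out = {}
    for rec in json.loads(raw):
        coding = {dim: rec.get(dim) for dim in ALLOWED}
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[rec["id"]] = coding
    return out

raw = ('[{"id":"ytc_Ugyf7IpL6rjw0aPvv8J4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugyf7IpL6rjw0aPvv8J4AaABAg"]["policy"])  # ban
```

Validating against the allowed sets catches malformed model output (e.g. a misspelled label) before it reaches the coding table, which is why the lookup by comment `id` rather than array position is used: the model is not guaranteed to return records in input order.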