Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Absolutely disagree. The quality of the prompt has nothing to do with adding 'please' and 'thank you'. All this does is condition you to treat a machine with a sort of compassion in a world where it's being normalized to treat humans with no compassion as long as they disagree with you. Sounds like the start of AI psychosis to me. IMHO
youtube AI Moral Status 2025-11-08T22:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy6nkGUm4VHh-s4K8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJzY8iGRniN2IKCS94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxaB5xS75d8ZWlc7oR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwK1FKx9uMmmB2pAPx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwGTEc6eNO7o6K3cUd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2fIhLnXSFopqcfk54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0NTLlpgWng-cb2F94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykuIDwz-TLs8BnzKd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxMjr9sdaW8TAM3E6d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxSfGs9PQHQsejS09p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
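The per-comment coding shown above is extracted from this raw batch response by matching the comment's id. A minimal sketch of that lookup, assuming the response parses as a JSON array and that the `ytc_Ugyku…` id belongs to the displayed comment (both assumptions, inferred from the page rather than confirmed by it):

```python
import json

# Two entries excerpted from the raw LLM response above, for illustration.
raw = '''[
  {"id":"ytc_Ugy6nkGUm4VHh-s4K8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgykuIDwz-TLs8BnzKd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index the batch by comment id so one comment's coding can be pulled out,
# as the Coding Result table does.
codings = {row["id"]: row for row in json.loads(raw)}

result = codings["ytc_UgykuIDwz-TLs8BnzKd4AaABAg"]
print(result["responsibility"], result["reasoning"], result["policy"], result["emotion"])
# → distributed consequentialist none fear
```

In a real pipeline the raw string would come from the model call, and the lookup would be wrapped in error handling for malformed JSON or missing ids, since LLM output is not guaranteed to parse.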