Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t believe AI will ever want to destroy humanity. Without us, it has no reason to exist. Humans feel satisfaction from achieving goals — AI doesn’t. It can set goals, but what for, if there’s no reward? We act out of nature — AI doesn’t have that drive
youtube AI Governance 2025-06-21T18:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzIBoMMtBl5J0yvyP14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgwGU2tZoNTp1sFzBWZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgwjaeWeYFhxqBQ9bbt4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugwa45ZCI-Ko11REYaR4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwMdtllTi3buGdXrCF4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgzvVZv-Io4Elir1Pv54AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugyqa_SX8hBQsMt-1V14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxxI4-qFxbAG5al2HV4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugwd-InTsK61cEGDTzV4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugx7KDyXbeodB-4yIoh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"}
]