Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We may be missing the window to protect ourselves from AI. People think longitudinally rather than logarithmically. Therefore many people believe we have at least 10 or 15 years. When in fact because of logarithmic growth we only have 2 to 5 years for this AI risk to become apparent. That's likely not enough time for us to make a correction....
youtube AI Moral Status 2025-10-07T20:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwqQ8GcRk4ObtwKvk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzCDION-jNLO07p1hB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNJKY-sprOK7MRqh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx23gycq8nvEy490iN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxUPR9zuxIk8_wio0Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzApPw4V9xTGBQo5FB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwkLxI0_EhsfsFN6Gt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxaIneG1a3K5cA2HyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxR2aOTHRlJXAWn4vZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzi3PMA3qMeqcSCX4l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
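The raw response above is a JSON array of per-comment codings, keyed by comment id. A minimal Python sketch of how such a response could be parsed and indexed by id (the two-object excerpt here is taken from the array above; the lookup id is the one matching the coding result shown earlier):

```python
import json

# Excerpt of the raw LLM response: one coding object per comment id.
raw_response = '''
[
  {"id": "ytc_UgwqQ8GcRk4ObtwKvk14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzCDION-jNLO07p1hB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Retrieve the coding for the comment inspected above.
coding = codings["ytc_UgzCDION-jNLO07p1hB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

In practice the model output may contain surrounding text or malformed JSON, so a production parser would validate each object against the expected dimension keys before indexing.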