Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hank, how would you compare the AI panic about the future different from the "I want to believe!" argument? We don't know and we don't actually have concrete evidence about the future, aside from the very convincing fact that (historically) bad people do bad things with good tech.
YouTube · AI Moral Status · 2025-10-31T00:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyjyMyY_O4NgeZAJjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxmTHu1yRq14lt75Dd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNWZ7nhSCvpXJJsDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxT7RhFToA3B5KS5el4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBfFsr_6_n16hJfed4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxm7-V2cw080X9sQZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyCiYAK2ms5Q0A5qhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxBMJKI2GG-3mj8Qi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKp43VLPuelxIF9Kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjYNeaaZSElY40Qx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
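A minimal sketch of how a raw batch response like the one above could be parsed and indexed by comment id, so the codes for any single comment can be looked up. It assumes the model always returns a well-formed JSON array of objects with an `id` field; a production pipeline would also validate the field values against the coding scheme.

```python
import json

# A trimmed raw batch response (same shape as the array above,
# reduced to one entry for illustration).
raw = '''[
  {"id": "ytc_Ugxm7-V2cw080X9sQZx4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# Parse the array and index the rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding result for one comment.
row = codes["ytc_Ugxm7-V2cw080X9sQZx4AaABAg"]
print(row["responsibility"], row["emotion"])  # user fear
```

If the model ever emits malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a response for manual re-coding.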