Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why don't we optimize AI for sustainability, and I understand there's going to be this whole point of view where people start saying oh what if it optimizes humans and kills them, if humans are going to destroy this world then it's going to kill humans, it is up to the humans not to destroy the world then there won't be a need to optimize people away Anybody that says that is afraid of accountability
youtube 2025-11-21T20:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzJYmVOzOtzb8zhR3B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyG_5t0zuHHqee-APp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyr_do8L1rrWCResXJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxqQBDM56A-dZObWb14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzEZYl6egjEJ7UDSpR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz5FWZGSrcCi-wqacN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxZx4aQ65jP-0IOfbd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyTAfhce-JHuAD8aWx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw0ZcrBcp8kBfVmbWV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyopdqcMkHuuX_o_kp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
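A raw response like the one above can be parsed and sanity-checked before the per-comment values are stored. The sketch below is a minimal example, assuming the label vocabularies inferred from this sample (the full codebook may define additional categories, and `parse_codings` is a hypothetical helper name):

```python
import json

# Allowed labels per dimension -- inferred from the sample response above,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response; keep only entries whose labels
    all fall inside the allowed vocabularies."""
    valid = []
    for entry in json.loads(raw):
        if all(entry.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(entry)
    return valid

# Example: one well-formed entry passes, an out-of-vocabulary one is dropped.
sample = ('[{"id":"ytc_a","responsibility":"user","reasoning":"consequentialist",'
          '"policy":"regulate","emotion":"approval"},'
          '{"id":"ytc_b","responsibility":"martians","reasoning":"unclear",'
          '"policy":"none","emotion":"fear"}]')
print([e["id"] for e in parse_codings(sample)])  # → ['ytc_a']
```

Dropping (rather than repairing) out-of-vocabulary entries keeps the coded dataset clean; rejected IDs can be re-queued for a second coding pass.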