Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well to the "It's very hard to build an AI with XYZ" line of thought: It might end up being difficult even if we can make an AI that values the same things as us to think it is worthwhile to keep us around. Just think about all the evidence we have of us being terrible to each other.
youtube AI Moral Status 2025-10-31T08:2…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnhVMdx4H5KG97R914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwgoTu7UFS3CUEDwlF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8w9Zsyzc24y2przp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxtR4Pt8nUMCs_ZJ3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyC5Gw2e__-OdtBDZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydqfQICatDtEr9AZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrDlVgZczTRreG_al4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXw9i7ZA1Aq7C_Q0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmnECZLmYxsytfsqR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
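The raw response is a JSON array of per-comment codings, so recovering the Coding Result row for a single comment is a matter of parsing the array and matching on `id`. A minimal sketch, assuming the raw response is valid JSON (the `coding_for` helper is hypothetical; the comment id and field names are taken from the response above, truncated here to two entries):

```python
import json
from typing import Optional

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw_response = '''[
  {"id":"ytc_UgydqfQICatDtEr9AZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

def coding_for(comment_id: str, response_text: str) -> Optional[dict]:
    """Parse the batch coding JSON and return the row for one comment id."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

row = coding_for("ytc_UgydqfQICatDtEr9AZ14AaABAg", raw_response)
# row reproduces the Coding Result table for this comment:
# responsibility=ai_itself, reasoning=consequentialist, policy=unclear, emotion=fear
```

Looking up by `id` rather than by array position matters because the model codes comments in batches and is not guaranteed to preserve input order.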