Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All of this is still essentially harmless; words are words. The real kicker is that the only way to even know to a reasonable degree of certainty an AI is "aware" is when it successfully deceives us to further its own goals that are directly detrimental to ours. And there is clearly no way these scientists and engineers are not going to try and create that.
YouTube · AI Moral Status · 2025-12-20T17:3…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | developer
Reasoning      | consequentialist
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx_4_lVlp5FOcpvxGp4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy2r3DfGQIKMx_I5nZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxo4iKlOp0d2nwT_154AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyEUyxsJnoY9x417Mx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzcUMtnOKgw8s6iV0l4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyZ3bTVRSq6e7irIZN4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwlB7oV_6FoRLRYUyJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzbiaZ4yCzlnnNDWLd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVIP_YNkeOrgVFEJR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwDS3OR2lOeObrG5ax4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
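One way to sanity-check a raw batch response like the one above is to parse the JSON and verify that each record carries an allowed value for every coding dimension. A minimal sketch, assuming the allowed category sets below (inferred only from values visible in this response, not from the project's actual codebook):

```python
import json

# Assumed category sets per dimension, inferred from this one response.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

# A one-record excerpt of the raw LLM response shown above.
raw = (
    '[{"id":"ytc_UgzcUMtnOKgw8s6iV0l4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
)

def validate(records):
    """Split records into (valid, invalid); invalid entries list bad dimensions."""
    valid, invalid = [], []
    for rec in records:
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        (invalid if bad else valid).append((rec["id"], bad))
    return valid, invalid

valid, invalid = validate(json.loads(raw))
```

Records that fail the check (e.g. a model hallucinating a category outside the codebook) end up in `invalid` with the offending dimensions listed, so they can be re-queued for recoding rather than silently stored.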