Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Idk, watched the first 15mins or so, but the language of the guest just rubs me the wrong way. I don't think dismissing what "thinking" is to the philosophers, or using humanized language are actually good practices in this case. Like when he says that the AI behaves differently when it is "watched"- what does it MEAN? Reminds me of the whole quantum physics thing "particles behave differently when you see them"- something that makes sense in a technical context (as in, particles behave differntly when a photon interacts with them), but instead the mainstream gets confused by the language and comes to wildly inadequate conclusions. I operate under the assumption that AI doomerism is ALSO part of the AI hype, so this type of language/argument just doesn't really click :/
youtube AI Moral Status 2025-10-30T20:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxi_WQDxjBUM3DxMXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyubHlk3SYTc5ECco14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgwIP0X6C2Uh3Db8qat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwXGNnOa3vEPzKtm814AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_UgzxM-ZpKZHHmQchi7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxdZC77W8Sk51DN1hl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz0EPssorPnG-CUiWx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyL_J9maIR2t9Q5PMp4AaABAg","responsibility":"expert","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzxZWeZ9v_2i70bTUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzP2ObOZA0ZLAXZoTB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
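A minimal sketch of how a raw response like the one above can be parsed and indexed by comment id, with a check that every record carries all four coding dimensions. The function name `index_codes` is illustrative, not part of the tool; the two sample records are copied verbatim from the raw response.

```python
import json

# Two records taken verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugxi_WQDxjBUM3DxMXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyubHlk3SYTc5ECco14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Map comment id -> {dimension: value}; raise if a dimension is missing."""
    codes = {}
    for record in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in record]
        if missing:
            raise ValueError(f"{record['id']}: missing dimensions {missing}")
        codes[record["id"]] = {d: record[d] for d in DIMENSIONS}
    return codes

codes = index_codes(raw)
print(codes["ytc_Ugxi_WQDxjBUM3DxMXV4AaABAg"]["emotion"])  # indifference
```

This mirrors the display above: looking up a comment id returns exactly the Dimension/Value pairs shown in its coding-result table.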