Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:04:33 for one thing we need to be smarter about the prompts we give it, "you are a helpful AI assistant", or "you are an expert at X" doesn't cut it. For another, we've GOT to stop anthropomorphizing it so much, and know the only agency it has is what we give it, through tools and our minds.
youtube AI Moral Status 2025-11-01T09:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          industry_self
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzV2V0G7yDp1WKgOBp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTM3_p9Zs990HgOTt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzhHjhvvE2BcdADgod4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy9T-w0Clu46j3hNnB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxTGxgEQnozxdvv0Kp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzxn4xso09goJWUAI54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwptZLUKuh6knkJaW54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw928RQgF47WVOLCOd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwgcqhLlRnka9l9Ia14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
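The raw response is a JSON array with one object per comment in the batch, so the coding shown in the table above can be recovered by matching on the comment id. A minimal sketch of that lookup, assuming only the field names visible in the response (the helper name `coding_for` and the trimmed two-entry sample are illustrative, not part of the tool):

```python
import json

# Trimmed sample of a raw LLM batch response: one object per coded comment.
# The ids and values here are copied from the response above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugzxn4xso09goJWUAI54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzV2V0G7yDp1WKgOBp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

def coding_for(raw: str, comment_id: str):
    """Return the coding object for one comment id, or None if absent."""
    for item in json.loads(raw):
        if item.get("id") == comment_id:
            return item
    return None

coding = coding_for(RAW_RESPONSE, "ytc_Ugzxn4xso09goJWUAI54AaABAg")
print(coding["emotion"])  # resignation — matches the Coding Result table
```

Because the model returns codes keyed by id rather than by position, a dropped or reordered comment in the batch does not shift the remaining codings onto the wrong comments.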