Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wdym it "won't"? what about the AI alignment problem? AI governance gaps? bullshit. I don't think we should just defer talking about existential issues, as well. The "right time" may be too late. The problems she identified are interesting and important, and we shouldn't be telling people that they're going to destroy the world, but we shouldn't push aside other important problems for these...
youtube · AI Responsibility · 2025-01-05T18:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw0qVgaz67NR_EHNQZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgxO9Hx0zZAQo0ued4x4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz7jb_TUkGILDANhll4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz1hfKZyzjBSYS1y4B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "disapproval"},
  {"id": "ytc_UgxrhR48AgG7XP7GzTN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzIGkzWk856XM4_iO54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwPDIRoxDdM6dMmFOd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwSXx2FvyubnePpsJd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwTR-15F4sxDPAWn4p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwLBupkdfmixBwrFSZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
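A minimal Python sketch of how a raw response like the one above could be parsed and checked before use. The allowed vocabularies below are inferred from the values that appear in this response; the real coding scheme may permit additional labels, so treat them as assumptions.

```python
import json

# Assumed label vocabularies, inferred only from the values seen in the
# raw response above -- the actual codebook may allow more labels.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"approval", "disapproval", "outrage", "fear",
                "resignation", "indifference", "mixed"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM coding response (a JSON array of per-comment
    objects) and reject any entry with an unknown dimension value."""
    entries = json.loads(raw)
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(
                    f"{entry.get('id')}: unexpected {dim} value {entry.get(dim)!r}")
    return entries

# Example: the entry corresponding to the comment coded above.
raw = ('[{"id":"ytc_UgzIGkzWk856XM4_iO54AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"regulate","emotion":"mixed"}]')
coded = validate_response(raw)
```

Validating at ingest time catches malformed or off-vocabulary model output early, rather than letting it silently enter the coded dataset.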