Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They can only do what they are programmed to. Human error is more of an issue than machines doing something they are incapable of doing because they weren't programmed to do it. I'd feel safer if these systems were more autonomous considering right now it's based on subjective human decisions that are greatly affected by mood, education, religious beliefs, anger, bias, etc.
youtube 2012-11-23T19:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwkTd4vXc32HI_5tfh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy9mBoaAtemGq2dYNB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzGj_CMgD8AM9wGPKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzzp5_RDZwOLFyXjLN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyD5pyxiyLGw_kg4w54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyKGNy6C-78aOwLCFV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwvf7PN3ITtzuv76et4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxSaelGApyYIXLQfkh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyqYzqMuGvNtO2BBwV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
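A raw response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal illustration, not this tool's actual code: the allowed category sets are inferred from the values visible in this response, and the function name `parse_raw_response` is hypothetical.

```python
import json

# Allowed categories per dimension, inferred from the values seen in
# this raw response (an assumption, not the tool's official codebook).
SCHEMA = {
    "responsibility": {"none", "user", "ai_itself", "developer", "government", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "industry_self", "none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "resignation", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into {comment_id: {dimension: value}}, dropping any record
    whose values fall outside the known category sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records missing a comment id
        codes = {dim: rec.get(dim) for dim in SCHEMA}
        if all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            coded[cid] = codes
    return coded
```

With the response above, `parse_raw_response(raw)["ytc_Ugy9mBoaAtemGq2dYNB4AaABAg"]` would yield the same values shown in the Coding Result table (responsibility `user`, emotion `approval`, and so on). Validating against a fixed schema catches the common failure mode where the model invents an off-codebook label.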