Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing I'm surprised isn't brought up a lot is gaming the algorithm. When asking questions like, "do you find drug use harms other people other than the user?", it's pretty evident what the "correct" answer is. It's not like the algorithm can reach into your mind and see what you _actually_ believe. So you're pretty much free to answer per the algorithm's expectations to get the score you want.
youtube 2022-07-30T22:4…
Coding Result
Dimension      | Value
Responsibility | user
Reasoning      | mixed
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx0rOf-YsZlQXSAAdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxs47swESkB8PSnEel4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugywyf6HJwovz8if_sx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy5uZ1D7jsHUe_haLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-JwPV4YlWtBD4EV94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxsx1wtoi6wJZmyyAB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxw49NGSolPbowh-714AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwNQz4Cle4NHtPO_154AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3obXyY08LbIRDApJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzrdeD_eXFN5jbReXJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"}
]
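The Coding Result shown above is presumably obtained by parsing this JSON array and selecting the row whose `id` matches the displayed comment (here `ytc_Ugx-JwPV4YlWtBD4EV94AaABAg`). A minimal sketch of that lookup, assuming the response parses as valid JSON; the function name and the abridged two-row payload are illustrative, not taken from the actual pipeline:

```python
import json

# Abridged copy of the raw LLM response above (two of the ten rows).
raw_response = """
[ {"id":"ytc_Ugx-JwPV4YlWtBD4EV94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx0rOf-YsZlQXSAAdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coded dimensions for one comment."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return {dim: by_id[comment_id][dim] for dim in DIMENSIONS}

coding = lookup_coding(raw_response, "ytc_Ugx-JwPV4YlWtBD4EV94AaABAg")
print(coding)
# {'responsibility': 'user', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'mixed'}
```

The values returned for this `id` match the Coding Result table above, which is a quick way to verify that the displayed dimensions really come from the raw model output.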