Raw LLM Responses

Inspect the exact raw model output behind any coded comment.

Comment
This isn't limited to language use, either. A paper came out last year showing that AI biases get adopted by humans interacting with them (and more so than when interacting with human biases). Overuse of words is just one example of these biases. This potentially creates a feedback loop where AI then get trained on increasingly biased human data, which causes humans to become more biased.
youtube 2025-07-21T10:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyLUVSyNmdl3n9KNSN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxX_zSqxBy74hV4DUp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwlBGvqGjN2Jp4zDRZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxMKQQANGf1Q3IA7t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxba9eDtt1wWI85XCF4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxfWI0a0wNfoXqMM0h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx-dQGLDG_5DahqZH94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyjz5FFOrUHb_54pAV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwOwYzIrrtumj-esER4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx1rKh8PtiyG8YQxhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
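The raw response is a JSON array of per-comment coding objects, so matching a coded result back to its comment is a simple lookup by `id`. A minimal sketch in Python (the `coding_for` helper is hypothetical, and the response is truncated to two entries here for brevity):

```python
import json

# Raw LLM response, truncated to two entries from the dump above.
raw = '''[
  {"id":"ytc_UgxX_zSqxBy74hV4DUp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLUVSyNmdl3n9KNSN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

def coding_for(comment_id: str, raw_response: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    by_id = {entry["id"]: entry for entry in json.loads(raw_response)}
    return by_id[comment_id]

result = coding_for("ytc_UgxX_zSqxBy74hV4DUp4AaABAg", raw)
print(result["responsibility"], result["emotion"])  # distributed fear
```

This matches the Coding Result shown above for the quoted comment: responsibility "distributed", emotion "fear".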