Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The average person believes companies can “tweak” LLM models the way you tighten a bolt in a machine. It doesn’t work that way, and these super large LLM models don’t get “coded” the traditional way software is. They don’t even know what makes the chatbot good at making people go into mental health spirals, so there’s no possible way they’ve “fixed” the issue
youtube AI Harm Incident 2025-11-08T08:2…
Coding Result
Dimension      Value
Responsibility company
Reasoning      consequentialist
Policy         regulate
Emotion        resignation
Coded at       2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxYnCllpe-1ngRKPRJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy4BCJ-3I5BfcMEY194AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw06TApPIT5BNJSZKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy00m3KuQo0jKTYT3J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwY7Blx1KpBTCTsRX54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyZfemiOS6YIawU00V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugxf5u84wVR8dqkiamx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw9ILIrCJ46Y9JvSzd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx1SpV47j49VjGCRRF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxjIK3FMh1Rd-oiMvl4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
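A minimal sketch of how a raw response like the one above can be turned back into per-comment coding results: parse the JSON array, index it by comment id, and look up the four dimensions (responsibility, reasoning, policy, emotion) for a given comment. The variable names and the single-record example response here are assumptions for illustration, not the pipeline's actual code; only the id and dimension values are taken from the record shown above.

```python
import json

# Hypothetical example response: one coding record in the same schema
# as the raw LLM response displayed above (id copied from that output).
raw_response = """[
  {"id": "ytc_Ugy00m3KuQo0jKTYT3J4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "resignation"}
]"""

# Index the coding records by YouTube comment id for fast lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment shown in this section.
coding = codings["ytc_Ugy00m3KuQo0jKTYT3J4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company resignation
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a record for manual re-coding.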