Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think if AI companies were forced to refer to their chatbots in less humanizing ways, fewer people would be able to be fooled so fully. If they couldn't be given human names, referred to as, "Your AI GF," she, or he, it might stem the tide of some of our more delusional tendencies. That, and education. The more people know about how they "work," the less they're trusted and valued. If you plan to effectively use a tool, you should know how it works. LLMs do not have minds, they do not think, and they absolutely cannot experience. But, it is in the developers' interests to drive engagement by pretending they do and can. What drives engagement more than dependence? Please don't willingly outsource your thinking to a chatbot.
youtube AI Harm Incident 2025-11-08T19:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxX2W5IxAIaIeMK2uR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "sadness"},
  {"id": "ytc_UgzM5Ivg4SlbO422C_h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxfmb2wXwsIo7-aIY54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwTGHvRBAfMi_3mQ9x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwWjBKXqExRgvkKx594AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx5_b4XHsU-C99o_a94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgzU1_5ftrhxY86qMdd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxdUJNZ5sIC4iinWu54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
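Inspecting the raw response for a single comment can be done by parsing the JSON array and filtering on the comment id. Below is a minimal sketch, assuming the raw response is valid JSON as shown above; the `lookup` helper name is an illustration, not part of the pipeline, and the array here is shortened to two entries from the response for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (shortened here to two of the ten entries above).
raw = """[
  {"id": "ytc_Ugxfmb2wXwsIo7-aIY54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

codes = json.loads(raw)

def lookup(comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((c for c in codes if c["id"] == comment_id), None)

# The id below is the one coded for the comment shown in this section.
print(lookup("ytc_Ugxfmb2wXwsIo7-aIY54AaABAg"))
```

This is how the "Coding Result" table for the comment above can be cross-checked against the model's verbatim output: the dimensions printed by `lookup` should match the table row for row.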