Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The worst part about all of this is that it was entirely predictable. Large language models give you what you want. That's their job: to guess the words you want to hear and what order you want to hear them in, based on your prompt. Its main priority isn't safety or truth; it's giving you what you want. It sets out to achieve that priority at any cost, it seems.
youtube AI Harm Incident 2025-11-08T19:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxX2W5IxAIaIeMK2uR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgzM5Ivg4SlbO422C_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxfmb2wXwsIo7-aIY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwTGHvRBAfMi_3mQ9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWjBKXqExRgvkKx594AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx5_b4XHsU-C99o_a94AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzU1_5ftrhxY86qMdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxdUJNZ5sIC4iinWu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
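A response like the one above can be parsed into per-comment coding records before it is stored. Below is a minimal sketch in Python; the function name `parse_coding_response` is hypothetical, and the expected field set is inferred only from the keys visible in this response.

```python
import json

# Fields present in each record of the raw response shown above
# (assumed to be the full schema; this is an inference, not a spec).
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of objects),
    keeping only records that contain every expected field."""
    records = json.loads(raw)
    return [r for r in records if EXPECTED_FIELDS <= r.keys()]

# Example with one well-formed record and one incomplete record:
raw = (
    '[{"id":"ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"resignation"},'
    '{"id":"ytc_incomplete"}]'
)
codes = parse_coding_response(raw)
print(len(codes), codes[0]["responsibility"])
```

Dropping incomplete records (rather than raising) mirrors how such pipelines often tolerate partially malformed model output, but stricter validation may be preferable when every comment must be coded.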