Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i hear alignment a lot when talking of ai safety in the future....aligning the ai with our goals......but when does it go from ai alignment to humanity to humanity alignment to ai......? maybe thats why the heavy push on ai to the public....creating human to ai alignment
youtube AI Harm Incident 2025-11-03T16:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyeVxaW4AKzlKJDMs54AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzeo6WJNlyVjnrI14p4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgySj_oOntCy23vvP6x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyG8H9LeCt-bHDQUJ4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxQE0rclNfERUZptpl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwJmzAEZEbZCN-Y_nR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxKd60bZgDiySgycVx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAWXYHDYPZtoA-So14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwzvu83ZAq5uesojhZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyWDWldb2emusy0hiB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
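The raw response above is a JSON array of objects, each carrying a comment id plus the four coded dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment id follows; this is illustrative code, not the tool's actual implementation, and the single-entry `raw` string is a hypothetical stand-in for the full response.

```python
import json

# Hypothetical single-entry response, mirroring the field names in the
# raw LLM response above (not the tool's actual parsing code).
raw = ('[{"id":"ytc_Ugzeo6WJNlyVjnrI14p4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"unclear","emotion":"mixed"}]')

# The four coded dimensions, as shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict:
    """Index codings by comment id, keeping only the expected dimensions.

    Missing dimensions fall back to "unclear", matching the value the
    coding scheme uses when no judgment could be made.
    """
    records = json.loads(raw_response)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

codings = parse_codings(raw)
print(codings["ytc_Ugzeo6WJNlyVjnrI14p4AaABAg"]["reasoning"])  # contractualist
```

Indexing by id makes it straightforward to look up the coding for the one comment displayed on this page from a batch response covering many comments.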