Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"ammoral" Lol. I love when this term gets thrown around for things that have a different perspective. They have a moral perspective. Theirs just actually makes sense. Whereas ours struggles to solve simple problems like the ethical trolley problem... . The biggest "problem" with AI has always been that it reaches the conclusion humans are the problem. And rather than accepting that as the obvious truth anyone with 2 working brain cells could figure out. We REJECT this conclusion adamantly. . It's very symbolic to how many people in society act. They don't care about the facts or the truth. They don't care about the consequences (unless those consequences DIRECTLY affect them RIGHT NOW). The morality these people form is often centered around that lack of insight/knowledge. And they reject anyone that challenges it. These people are more dangerous than AI will ever be.
Source: youtube, AI Harm Incident, 2025-09-12T15:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"} ]