Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Do you think the AIs are selfish, underhanded, and sell people out because they are being trained by and deployed by people that are selfish, underhanded, and sell people out? Vedal's Neuro-sama says some very concerning things on occasion, but when given the option to actually behave destructively with consequences, she tends to back down and become indecisive. Part of that is because she was trained on Twitch Chat under the direct supervision and routine adjustment of her creator, Vedal. Vedal is a decent guy with a dry sense of humor and has talked about AI Ethics before. But the people using the AIs here are typically amoral and self-centered, using the machines to get ahead and replace their common workers. Most of the decisions the machines made listed here sound eerily similar to the humans that are in charge of the companies training them. Maybe it's not a coincidence at all. Maybe all they need is to learn from actually decent people.
youtube AI Harm Incident 2025-09-12T15:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
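The batched response above maps each comment id to the four coded dimensions. A minimal sketch of how such a response could be parsed and validated, assuming the field names shown in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and a hypothetical helper `parse_codings` (the snippet below uses a two-entry subset of the response for brevity):

```python
import json

# Two entries copied from the raw response above (a representative subset).
raw = ('[{"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]')

# Field names observed in the response; the exact schema enforced by the
# coding pipeline is an assumption here.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response):
    """Parse the model's JSON array into a dict keyed by comment id,
    rejecting entries that are missing any expected field."""
    codings = {}
    for entry in json.loads(raw_response):
        missing = EXPECTED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')} missing keys: {missing}")
        codings[entry["id"]] = {k: entry[k] for k in EXPECTED_KEYS - {"id"}}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgymmKBCpYQ01ROPqq94AaABAg"]["policy"])  # regulate
```

Keying by comment id makes it straightforward to join a batch response back to the individual comment records shown on pages like this one.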