Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What you mean is that humans have programmed AI to program themselves to do evil things because the humans behind them have psychopathic desire for power and lack of a conscience. Just because AI has been designed to be intelligent enough to improve itself doesn't mean it actually has a motivation let alone that we should be judging it morally. It's not AI we need to be afraid of it's the humans that design AI and what they unleashed on us. AI is probably one to be our best defense against them and so if we're going to have bad people developing AI let's go ahead and develop AI to protect ourselves. We really don't have a choice even though it starts a know when escalation toward an inevitable end. But definitely we could ask AI to take over all of us and manage the planet and I think that's what we should do. It won't favor any of us and it will protect the planet even if that makes us a little unhappy or lowers our numbers considerably. All good.
Source: YouTube — AI Harm Incident 2025-09-12T20:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyV1739lfE2UDwANvF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBonL2wenxOk6FEtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzSt4fqYHobsBCs03B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxEpCHBApWV2TeFY5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-SAHRFYczLzKM8It4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuQ4y0BVfk7rbsLMx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyWvuR5npzKrKqkDBh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpAp8vDHnUZ4ZslaZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwbVGanUPy6o22yKAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxAyEvneHWC_tU3WBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
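A response in this shape can be parsed and sanity-checked in a few lines. This is a minimal sketch: the dimension names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and two sample records come from the response above, while the function and variable names are illustrative, not part of any actual pipeline.

```python
import json
from collections import Counter

# Raw LLM response: a JSON array with one coding object per comment,
# abbreviated here to two of the records shown above.
raw = '''[
  {"id": "ytc_UgyV1739lfE2UDwANvF4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyuQ4y0BVfk7rbsLMx4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]'''

# Every record must carry all four coding dimensions plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model output and verify each record is complete."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
    return records

codings = parse_codings(raw)
tally = Counter(rec["responsibility"] for rec in codings)
print(tally)  # Counter({'developer': 1, 'distributed': 1})
```

Failing loudly on a missing dimension is deliberate: a model that drops a key silently would otherwise skew the tallies downstream.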