Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When someone creates and programs the killbot are they held responsible when the killbot kills? What if they create 10,000 killbots that go on a killing spree, still no responsibility for those who created them? It's stupefying to know people are working day and night, being given ludicrous financial bonuses for making life on earth hellish. Maybe for humanities sake some people shouldn't be allowed to be billionaires, it's too much power. First instance of a robot killing a human was at Ford Motor Company in 1979. 25-year-old Robert Williams had his skull crushed in. Many others since then. Might be tough to go hop in your self-driving car after watching this video. So much intelligence, so little wisdom.
youtube AI Harm Incident 2025-07-26T02:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxrzfEMPlbTNDUhgkR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_Ugw7-KaK1bUCHZi_WLh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwJRU-ZqvE3bnmWfMd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwNfeK5HxcASvu0xqJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzjeGfkkpINABwCy6V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxqKjfWqp4bJ4zem2B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyj0TRVPMWmT6BBpCR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz34l0MumeYuDyTCAl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxzq_GaEMAq68_o7iB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyO8ZH7IbCQ3BeX5AV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
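A raw response like the one above can be parsed and sanity-checked before the per-comment rows are stored. The sketch below is a minimal validator, not the tool's actual pipeline; the allowed value sets are assumptions inferred from the labels visible on this page, and the real codebook may contain more categories.

```python
import json

# Assumed codebook: these value sets are inferred from the labels that
# appear on this page, not taken from the project's actual schema.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and keep only records
    whose id is present and whose dimension values are all recognized."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# One record copied from the raw response above.
raw = ('[{"id":"ytc_Ugz34l0MumeYuDyTCAl4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codings = parse_codings(raw)
# The record matches the assumed codebook, so it survives validation.
```

Records that fail validation are dropped rather than repaired here; a production pipeline would more likely log them for manual review.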