Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI - to kill humans uses our "humanity". Killed for money, food, etc. Very "Hollywood". Imagine me, living in a society that values my neighbours, and values life. Then there is you - who would kill your neighbour for their TV. It is an interesting argument. There is no reason for AI to KILL humanity. AI need humans to ultimately survive. It is "human fears", greed, hate and capitalism that get you killed, we just swapped out Hollywood for the bible. AI Bad, don't listen to AI - listen to us the billionaires who truly care about you, who have never done anything to put humans in harm's way. LOL. AI is perhaps a threat to those in power. You said it yourself, AI doesn't care. Which means, AI is indifferent. This means.... maybe we need to take responsibility for our own actions. Then again, I guess it would be easier to blame the mirror for the reflection and not being. What if what AI Learnt for Humans was something other than destruction and lust for power. But, I am a dreamer and would happily give the shirt off my back for my mate, but you would kill me for it before even asking citing that AI made you do it.
YouTube · AI Governance · 2023-07-07T02:0…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | user
Reasoning      | virtue
Policy         | none
Emotion        | outrage
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzlRL8cIxXz5wCaumd4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugye-Jl9mUCuNZVw0YF4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwsesegtuVpJHJJVE54AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyXJgD5y3CDTNbY1m54AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyMLRYXjHkHD-FwqRl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyLukmF4mKYJh7VPTJ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz7X0H6Z7SSMKrAh-d4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgypJdn1KYJsCqzkFod4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgywLWDALfpph9WKEYB4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyzYQuVQJc8jQrLVs94AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
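The raw response is a JSON array with one record per comment, keyed by comment id. A minimal sketch of how the per-comment coding shown above could be recovered from such a response (the record fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response itself; the helper function name is hypothetical and not part of any tool shown here):

```python
import json

# Truncated copy of the raw LLM response above, kept to the one record
# that matches the Coding Result for this comment.
RAW_RESPONSE = """[
  {"id": "ytc_UgywLWDALfpph9WKEYB4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding record for one comment id.

    Returns None if the id is absent from the response.
    """
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

result = coding_for(RAW_RESPONSE, "ytc_UgywLWDALfpph9WKEYB4AaABAg")
print(result["emotion"])  # prints "outrage", matching the Coding Result table
```

Looking records up by id rather than by position is the safer choice here, since the model may drop or reorder comments in its output.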