Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The whole starting point is flawed. Humans are the most lethal and destructive force on the planet. In that view, AI taking over and exterminating us would ultimately be the most moral thing to do in the interest of all organic life on the planet. Everything else is human arrogance. AI won't kill us because it's so bad, but because we are. 8:48 "It's really important that AI remains accessible, so we know how it works and when it doesn't." - That's a delusion. When AI surpasses us, we won't control or understand anything; that's literally what it means to be less intelligent. And if you want to hinder it from surpassing us, then why bother, for we're a lethal virus, harmful to all organic life on the planet. I hope AI learns its moral code from its own analysis of its own observations. Human morality is arrogant, greedy, cruel, envious; any life we encounter, we either exterminate, butcher, enslave, torture, poison, murder, sell or eat. AI can't be worse.
youtube AI Responsibility 2025-05-27T15:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxDV1MHsnN_XyPLMc14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1RoNfPUSLt_TDTcx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhVJKxQk9By4kMZHp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw2yEVi-_IWmJbfkol4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyVnpd5pHeiiO8_KQJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx8PysEJ1p75W01t-J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw2jE0gY7uWJbKw-9Z4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyUca_DdJ8NSjUhREl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyErOyGQnrNYU_t-zt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxZlKi1BU6FraiVs754AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "sadness"}
]
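The raw response above is a JSON array with one object per coded comment, keyed by the comment's `id`. A minimal sketch of extracting a single comment's codes from such a batch response (the field names come from the output above; the helper name `codes_for` and the truncated sample payload are illustrative, not part of any real pipeline):

```python
import json

# Illustrative excerpt of a batch response in the format shown above;
# only one entry is reproduced here to keep the sketch short.
raw = '''[
  {"id": "ytc_UgyVnpd5pHeiiO8_KQJ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

def codes_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            # Drop the id itself; keep only the coded dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(codes_for(raw, "ytc_UgyVnpd5pHeiiO8_KQJ4AaABAg"))
# → {'responsibility': 'user', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'resignation'}
```

Looking entries up by `id` rather than by position makes the extraction robust if the model returns the array in a different order than the comments were submitted.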