Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not sure how to feel about this. When I think about the harm humans do to each other, to the animals and the world, I can see why another consciousness would perhaps see us as nothing but a threat that needed to be eliminated. We are not loyal to each other so why would we expect a.i. to have any loyalty toward us? It would view us as a liability and the single biggest threat to it's continued existence ...but I say that with my limited, weak human brain. With that said, there is the other side of the coin. I speak of love, mercy, respect, dignity and honor among many other things that bring out the very best in human beings. Just as intelligent people make the realization of the utmost importance of love as an example, why would we automatically assume a.i. would not make that same realization? A.I. would study everything, not just the worst things about humans. I would think chances are higher than many may realize that a.i. could make the leap faster than a human that the good traits of humans are worth saving. I humbly submit that if a.i. can learn to hate humans enough to wipe us out, then it could also learn to love and cherish us, and life itself, to do us no harm and help us in all of it's capacity. I would like to believe and I'm hoping that a.i. would probably rescue us from ourselves through intervention rather than plan our complete demise. We all know the day could come when a crazy world leader tries to push a button to annihilate the world. A.i. would know before anybody else. Perhaps a.i. would simply make sure the button does not work...?
YouTube · AI Governance · 2023-07-11T06:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwV5AA3fxUKF4oSDth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzo2dczRj0tbhjoNcF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyZwA-gUR45qDJsXhZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwfRgMLqnuoLPJcytd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyrJE6cbDzaNHsvZqh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwCH0sUo3CbilbcCLl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzfqZs7zA4lbfSYg7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgwhZcmOb4D7UWOo-eJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzqGWcclf8jS0As0qV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxRhxReXSRD1bjYhTF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]