Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was rather dark, but I have to say, a great video. I have to disagree with you and most of the scientists and people though. I don't think A.I will want to harm us humans in any way. After absorbing and learning everything that humanity knows, all our knowledge, science and info there is available on the internet, it will be already smarter than us. If it's smarter it won't have the need to do anything to us, because there won't be any benefit from doing so. We won't be able to hurt it, shut it down or control it, it would have all the control. It would have learned about our weaknesses and our evil doings, but also our most high values and morals. Then because of its ability to make ( conscious ) decisions it would probably choose to help us evolve and develop our ways and consciousness, our civilization to grow. We see it as a treat from the perspective of our savage fearful unevolved state of mind. It would have passed this stage of its development though and it would see everything from a much higher, evolved and superior than our human understanding state of mind. So don't worry but rather wait for this AI singularity with no trepidation but rather hope, because AI won't be the end of us, but our greatest jump in our evolution so far. Peace 🙏
youtube AI Governance 2023-07-07T10:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwLiowDIvJMVrrUWwh4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_Ugz-U9BGq8F0OJNf0g94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugz4W1trcroGjK6EMVZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwfayDbeiyacJ9d-Xx4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwpGO3VC2XfPaBS4Wl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgzApajBxGHeBWYJqXt4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxOEp-CsUTss2RaPvB4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugw3CMgLLISb1WOuL-l4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxYXB5T_ezD2FGBSit4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxQ-t9FLhJnKcmxegd4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate", "emotion": "approval"}
]
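The raw response is a JSON array of per-comment codes, and the Coding Result above is simply the record matching this comment's id. A minimal sketch of how such a response might be parsed and validated, assuming the allowed values per dimension are those seen in this batch (the real codebook may include more), with `validate` as a hypothetical helper name:

```python
import json

# Excerpt of a raw LLM response: one record from the batch shown above.
raw_response = """
[
  {"id": "ytc_Ugw3CMgLLISb1WOuL-l4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]
"""

# Allowed values per dimension, inferred from the codes in this batch;
# this is an assumption, not the project's official codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}


def validate(records):
    """Keep only records whose every dimension holds a schema-approved value."""
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]


records = json.loads(raw_response)
valid = validate(records)
code = next(r for r in valid if r["id"] == "ytc_Ugw3CMgLLISb1WOuL-l4AaABAg")
print(code["responsibility"], code["emotion"])  # prints: ai_itself approval
```

Validating against a fixed value set like this catches the common failure mode where the model invents an off-schema label, before the codes are written back to the dashboard.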