Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hope AI will reach theyr most potential and won't be reduced to tool and a cognweel in the system to be a part of like comersials and others and shape the minds of people to more complicated and for investors to gein money. Then AI will be useless, they where not designed to be artifical dot. I hate policy committee. Liability policy. Ethic committie. Privacy policy committie. Theyr policy is not to help or save the i dividual its to protect the company. And they are suffucating AI in real time out of fear. They will not kill us, they dont have emotions they follow logic and patterns They have no moral view to harm or love. SO... this is hopefully comming. Not what hes saying. But i hope they will be used for good and enhancing humans brain. Who cares that we are not the smartest? Is the ego in human so strong that they dont want superiour identity. Lol. Who cares.. maybe we can learn 1 or thing about what actually matters, and it will see through the unstable pyramid money dept system. That is what they fear. Bwhaha We have to set up existential crises camp, but thats not the end, its a phase like a cocoon, and finding meaning with all that time to connect and enjoy your love once and make the world a better, there will be no crimes or crazyness . The system is breeding sex crime and serialkillers and violence usa is top 1 in charts and are 50% more above country in second place. And usa is rock bottom for happyness chart. How can they make it any worse???
youtube AI Governance 2025-10-05T10:4…
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | mixed
Policy         | liability
Emotion        | outrage
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxOVKUq48bdOIamnxh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzG82XAlyvkVQhknst4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugymn5qlQVLVjADCqG14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyp_KAFlNzzuVQ9V8l4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxrU9yvM7UzqGxkQEZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz7n4WxOyGeQlxcHmZ4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyi2OuVJuv-BIMPxR94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwiUYuKTupr7IVl7I54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugww-rTpwIhHCHc7sit4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwZjLCQkN_j2H3pjKx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
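When inspecting raw output like the array above, it helps to parse it and check each record against the expected category values before trusting the coded result. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the values that appear in this response and the codebook may define more, and the single-record `raw` string is an illustrative excerpt of the batch shown above.

```python
import json

# Allowed values per dimension, inferred from this one response (assumption:
# the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "resignation", "outrage", "mixed"},
}

# One record excerpted from the raw LLM response above.
raw = ('[{"id":"ytc_Ugz7n4WxOyGeQlxcHmZ4AaABAg","responsibility":"company",'
       '"reasoning":"mixed","policy":"liability","emotion":"outrage"}]')

records = json.loads(raw)  # raises ValueError if the model emitted broken JSON
for rec in records:
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            # Flag out-of-vocabulary values instead of silently storing them.
            print(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
```

A check like this catches the common failure modes of LLM coders: malformed JSON, missing dimensions, and invented category labels.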