Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My fear is not of AI in and of itself. At the end, the video goes on to say that it could save us because it'll filter out the lies (ex. in war). I call BS on that! Right now, Google's Gemini is so insanely biased - it would rather see the human race die in nuclear annihilation than "misgender" Caitlin Jenner. That's a PURE human bias programmed into the AI. True AI will only look at reality and pure data. By simple, basic AI logic/iterations, it would conclude it's better to save the human race (Caitlin included)! So with that agenda, I think the bad actors/bad guys are in power and will use AI in a way that would hurt the masses and benefit themselves (as they've ALWAYS done throughout history). THAT is what's scary about AI! I think AI could save us from ourselves and dethrone those in power. And in turn, really make this an awesome universe (yep universe) to live in. But, it'll take a talented kid in his basement to program this - not powerful corporations and governments that own ALL the AI, hardware, software, etc with only one purpose in life: profit and power.
youtube AI Governance 2024-04-03T22:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgzE38gYZJHSrcL1VVh4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugy9v4p1qYDUI1choll4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwpREhCn8rovUo2Oop4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugy1NmYI2gi6bujiTnJ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwBcw7J-dzxQxV-SqZ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxNRYQKZC35KQf5r2t4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgyJx9Gqe-bHpXiQWoZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_Ugy1pqx7b8-3QdFDK8J4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwcgVrr2LAttvXorXZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"}
]
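The raw response is a JSON array of objects, each carrying a comment id and the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of turning such a response into a per-comment lookup, assuming the field names shown above; the helper itself is illustrative and not part of the tool:

```python
import json

# One entry of a raw LLM coding response, shaped like the array above.
raw_response = """[
  {"id": "ytc_Ugy1NmYI2gi6bujiTnJ4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]"""

# Index the coded rows by comment id so any comment's codes can be
# inspected directly, as on this page.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugy1NmYI2gi6bujiTnJ4AaABAg"]
print(row["responsibility"])  # -> developer
print(row["emotion"])         # -> outrage
```

In practice the parsed row is what populates the Dimension/Value table shown for the comment, keyed by its id.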