Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Fascinating, if highly depressing discussion. I'd like to think humans will get together to solve the AI safety issue, but humans appear to be more divided now than they have done certainly in my lifetime, if not for many more years. Much as with climate change, we're now at a stage where humans won't even begin to act until it's far too late. Money and power comes first, and that will lead to a lot of death and destruction over the coming decades. Perhaps we really are in the final stages of human existence on earth, and maybe we will deserve our extinction. On the other hand, 80+ years ago there was a race to develop the first weapons capable of destroying all life on the planet many times over, and we're still here. Only just - that threat is still looming, and with the way the world is right now, it wouldn't surprise me to see it happen. But we've held on for nearly a century with technology that would lead to mutually assured destruction, so it's possible that we might hold on for another century with incredibly dangerous AI, as long as it doesn't consider us a threat.
youtube AI Governance 2025-06-18T17:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxupByq1pJU7KQwkDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwXkNltFeimPmJMZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqBsep7so0OTfonf54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfI1vnL8WvQKAFH-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgziafO7OcSIak-2lXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy6rwnX5lC-hwWsLEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwFuc4NmZft8ezsNg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxlHbfZ_Kf0goYUIWp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxnLmiNozAbiWg42y94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiCIBcaMIxrDypsY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
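The raw response is a JSON array with one coding record per comment, keyed by comment id. A minimal sketch of extracting a single comment's coding from such a batch, using an abbreviated excerpt of the response above (only one record retained for brevity; whether this id corresponds to the displayed comment is an assumption based on its values matching the Coding Result table):

```python
import json

# Abbreviated excerpt of the raw LLM response shown above (one record kept).
raw = (
    '[{"id":"ytc_UgwfI1vnL8WvQKAFH-R4AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"resignation"}]'
)

# Index the batch by comment id for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

record = codings["ytc_UgwfI1vnL8WvQKAFH-R4AaABAg"]
print(record["policy"], record["emotion"])  # regulate resignation
```

In practice the full array would be parsed the same way; the dict-by-id index also makes it easy to detect missing or duplicate ids in a batch.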