Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know this comment is going to sound majorly Luddite-ish, but I am unhappy that some tech bros got to decide on behalf of humanity that now is the right time for AI and we have no choice otherwise. Like, of course it was being worked on, but these dudes decided that they needed to release this level of AI right now, ethics or any kind of greater discussion be damned. Who cares if the majority of people alive right now would have been against it, move fast and break things amirite?! (not saying they are against it, just that if you ever listen to what these men, and it is exclusively men, say, then it is clear that they don't give a flying fuck regardless.) I know that humanity is never ready for technological revolutions, but this definitely seems to have more implications about our future than, say, radios and televisions. This is the internet but on steroids. I don't like that we are having these discussions when the cat is already half out the bag, when the genie is not fully out of the bottle but certainly unputbackable. I don't like that amongst all the other issues in the world, I now have to worry about whether artificial intelligence is going to be used to good or evil within my lifetime. I was worrying about that anyway because I have severe anxiety, but at least before I could push it to the back of my head. Now we all need to be AI ethics researchers. I don't know if the joy AI being stupid outweighs the existential dread it brings me. Thanks, tech bros.
Source: youtube | Video: AI Moral Status | 2023-08-22T00:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
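
Each dimension takes a value from a closed label set. The Python sketch below validates a coded record against the label sets observed in the raw response further down; it assumes those observed labels are the complete codebook, which the actual coding scheme may extend, and the names CODEBOOK and validate are illustrative, not part of the pipeline.

    # Label sets as observed in the raw response below; the real codebook
    # may define additional values -- this is an assumption for illustration.
    CODEBOOK = {
        "responsibility": {"none", "developer", "company", "user", "ai_itself"},
        "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
        "policy": {"none", "regulate", "ban", "liability", "industry_self"},
        "emotion": {"indifference", "fear", "outrage", "approval", "resignation"},
    }

    def validate(record: dict[str, str]) -> list[str]:
        """Return the dimensions whose value falls outside the codebook."""
        return [dim for dim, allowed in CODEBOOK.items()
                if record.get(dim) not in allowed]

    # The coding result shown above passes validation:
    result = {"responsibility": "developer", "reasoning": "deontological",
              "policy": "regulate", "emotion": "outrage"}
    assert validate(result) == []

A non-empty return value would flag a record where the model drifted outside the expected labels, which is worth checking before aggregating codes.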
Raw LLM Response
[{"id":"ytc_UgzfvlXPX1-krKz4_h54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugw_GmpjYU3tiEkZYAV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_Ugx3Sv7YzQcpkxYLtCt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgzWoKGgMwegnVFY6Fl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgxuyGHRzPCzFYZB1td4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgyDhzQ85i6aTQ5Cmjl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgzA7YivRz9cystJLl54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugw1nQdPaXK-8x-MVqd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgwiETjJU1w0t9M9g3F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgzgnQUhuGDO3nFh_vF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]