Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ElaineWalker Eliezer can't see the forest for the trees. He is so busy worrying about some distant and far less likely sci-fi scenario where a super intelligent AI gains sapience and it's alignment goes off the rails maliciously destroying humanity, that he complete ignores the far more pressing and imminent threat of more primitive AIs destroying us all by doing exactly what they are told to do. It's so naïve to worry about a sapient AI trained with morality in mind going off the rails and deciding to go Sky Net on us when all it will take is some amoral billionaire or tyrant using a far more primitive but still incredibly dangerous AI to grant them what they ask it. The much more real threat is AI intentionally being trained to do dangerous and harmful things by the people that control them. "Grant me power over others, the ability to destroy those I can't control, and wealth at any cost." This is the crux of what these kinds of people will wish for and design AI towards fulfilling. When we have selfish amoral or outright immoral people like this running the world, and in control of this technology, all it will really take is somebody misusing a more advanced version of those protein folding and gene mapping AIs to create a bioweapon like never as been seen before. "Rather than cure cancer, can you fold me a protein that will cause cancer 100% of the time, or create a highly contagious prion that will turn human brains to mush?" Even just something like a more advanced version of those LLMs various groups and governments have been experimenting with training to manipulate people and using them to convince and control public opinion on social media is incredibly dangerous and we can already see the early effects of this on online discourse and our society. A non insignificant number of the comments under this video will be bots, although less so than if you go to a politically charged comment section. 
The more ubiquitous and simplistic bots you tend to find everywhere are often blatant advertisement spammers, but the more subtle and dangerous ones are far more prevalent anywhere online political discourse is being held. You can see people heatedly arguing in comment threads started by these things all over the place, more often than not, those involved are completely unaware they are getting riled up by a glorified chatbot. Most of these aren't even proper LLM backed AI. There was a more advanced experiment run over on Reddit, using proper LLMs trained to manipulate opinions, and the results produced by that experiment are eye opening in a terrible way. This experiment is the harbinger of our very near future. We are talking months, not years. Keep in mind that most of the damage we are seeing being done to online political discourse right now isn't even being done by real LLMs. It is primarily the result of millions of primitive bots run out of government or business sponsored server farms spamming simplistic corporate propaganda/ads and political rhetoric.
youtube AI Governance 2025-05-02T22:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxvPQ92KkiAh12cIMx4AaABAg.AB-9OHA0nf5AB9TX7GFgMO", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugwv3ku76UdYMMhaZBB4AaABAg.AAuwCA_0IpkAHdIwMjMdEr", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugwv3ku76UdYMMhaZBB4AaABAg.AAuwCA_0IpkAIUMPyYjxRw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugwz-DSOlbzHT5l2qkh4AaABAg.AAsc4Tk9LPvAL3cGFRe4JY", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgxF1_HmuOODIl8KiOF4AaABAg.AApuqwsgt9sAFjeSLjJeqO", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgxwBCPQiv0CFYurPq54AaABAg.AApdgojbkR2ABE-Ceb5rfn", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz2_DwgYk7tALNnvm54AaABAg.AAm5QMb0OU0AP00juszROl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz28NXuPR3l_NjULbV4AaABAg.AAlxyDYMDAkAB-_w5oYxa_", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgyBrsbkOUjTW8bZHgt4AaABAg.AAlEm5XLfaeAAmmQl9ksmo", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzB6btm-JilYmMP79l4AaABAg.AAl6xzIlRedAAlGAhSAIOo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
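The raw response is a JSON array with one record per coded comment, so checking the coding for a specific comment is a parse-and-lookup. A minimal sketch, assuming the raw response is available as a JSON string (two records are copied verbatim from the array above; the real array holds ten):

```python
import json

# Raw LLM response: a JSON array, one record per coded comment.
raw = """[
  {"id": "ytr_Ugwv3ku76UdYMMhaZBB4AaABAg.AAuwCA_0IpkAHdIwMjMdEr",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyBrsbkOUjTW8bZHgt4AaABAg.AAlEm5XLfaeAAmmQl9ksmo",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Fetch the coding for the comment shown above; these values match
# the Coding Result table (developer / consequentialist / unclear / fear).
coding = records["ytr_Ugwv3ku76UdYMMhaZBB4AaABAg.AAuwCA_0IpkAHdIwMjMdEr"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The same dict makes it easy to spot-check that every coded comment id in the batch actually appears in the model output before the values are written back to the dataset.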