Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI researcher here; I hate the generative AI bubble as much as anyone else, and I think AI safety is an incredibly important field, but I think there is a self-serving agenda being pushed by these authors. Yudkowsky has been ringing this bell for decades; he also has no formal education and has managed to make a name (and some money) for himself in AI spaces by making attention-grabbing statements like this. Nate seems like an educated computer scientist, but he is still personifying AI agents in a way that seems dangerous. AI doesn't lie, or deceive, or "try" to do anything, because it isn't capable of doing that; all it's doing is using very clever math to predict the most likely next word in a sequence, according to metrics and biases that are set by researchers and molded by the data we feed into them. Basically, stating that AI agents have super-human, or even human-like, intelligence is giving them far too much credit. MIRI pushes the agenda that AI have this super-human intelligence (or that it's always right around the corner) because it gives their research merit that otherwise doesn't exist. Bias exists, both in data and in researcher intent, and those biases present a problem when we use these agents for specialized tasks they weren't trained for, or accept their responses at face value. But claiming that these AI agents are more intelligent than us actually makes this more likely to happen, not less. Alignment happens when you learn what an AI agent was actually designed for, what use cases it is successful in, and using them for those cases only.
youtube · AI Moral Status · 2025-10-31T19:0… · ♥ 27
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwKmmPGxX0kbQuFEa54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxMBNzPv0YV5wk26Ll4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyQBf2-ySXEmEPDvGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyU52dzUJ0cP6uLeut4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyzUj23QPCSQQm3bkJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwznKZMqydHEd20M0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy6fUSAOw28Pw25Lrx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyfqT-dDAHuv22h8fl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwn4J8GVJfdW0tbAgN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzlPWOP0shh9ZTXadZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]