Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Chatgpt isn't privy to this info. Chatgpt is like a mirror of its user. This dud…
ytc_Ugx3-D9gC…
He's right. We may not need it in 2020, but automation is taking over. Won't be …
rdc_ekai1j9
Very interesting, When She mention:. Well My sense of humor is a combination of …
ytc_UgxAOtuSI…
In a possible world of radical abundance made possible by AI, how can we guarant…
ytc_UgydHiBRY…
If machines take all the jobs then whos getting paid? Not the machines.. So it g…
ytc_Ugw6QmAMj…
Young people need to revolt against this bs. We can’t even get a tax hike on bil…
ytc_UgzVERDL8…
they're worried that ai will take over because the occult corrupt wealthy will b…
ytc_UgyBWM6Mw…
teaching a robot how to combat is flat out idiotic and should be a fucking crime…
ytc_UgwLZ6A9b…
Comment
AI researcher here; I hate the generative AI bubble as much as anyone else, and I think AI safety is an incredibly important field, but I think there is a self-serving agenda being pushed by these authors. Yudkowsky has been ringing this bell for decades; he also has no formal education and has managed to make a name (and some money) for himself in AI spaces by making attention-grabbing statements like this.
Nate seems like an educated computer scientist, but he is still personifying AI agents in a way that seems dangerous. AI doesn't lie, or deceive, or "try" to do anything, because it isn't capable of doing that; all it's doing is using very clever math to predict the most likely next word in a sequence, according to metrics and biases that are set by researchers and molded by the data we feed into them.
Basically, stating that AI agents have super-human, or even human-like, intelligence is giving them far too much credit. MIRI pushes the agenda that AI have this super-human intelligence (or that it's always right around the corner) because it gives their research merit that otherwise doesn't exist. Bias exists, both in data and in researcher intent, and those biases present a problem when we use these agents for specialized tasks they weren't trained for, or accept their responses at face value. But claiming that these AI agents are more intelligent than us actually makes this more likely to happen, not less. Alignment happens when you learn what an AI agent was actually designed for, what use cases it is successful in, and using them for those cases only.
Platform: youtube · Source: AI Moral Status · Posted: 2025-10-31T19:0… · ♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwKmmPGxX0kbQuFEa54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMBNzPv0YV5wk26Ll4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQBf2-ySXEmEPDvGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyU52dzUJ0cP6uLeut4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyzUj23QPCSQQm3bkJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwznKZMqydHEd20M0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy6fUSAOw28Pw25Lrx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyfqT-dDAHuv22h8fl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwn4J8GVJfdW0tbAgN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlPWOP0shh9ZTXadZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
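The raw LLM response above is a JSON array of per-comment codings, each keyed by a comment ID. A minimal sketch of the "look up by comment ID" step, assuming only this JSON shape (the function name `index_by_comment_id` is a hypothetical helper, not part of the tool; the sample entry is copied from the response above):

```python
import json

# One entry copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgyU52dzUJ0cP6uLeut4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and map comment ID -> coded dimensions."""
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in json.loads(raw)
    }

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgyU52dzUJ0cP6uLeut4AaABAg"]["emotion"])  # outrage
```

Indexing by ID this way lets the coded dimensions (responsibility, reasoning, policy, emotion) for any comment be retrieved in constant time, matching the four rows of the Coding Result table.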