Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Self driving cars are the worst invention ever created. The computer age has got…" (ytc_UgwOIjISM…)
- "People, sorry for being so brutal about what I am going to say! Dr. Yampolskiy i…" (ytc_UgxJsZO3f…)
- "They're not replacing tech workers with AI, they're outsourcing their jobs overs…" (ytc_UgwC7lcLX…)
- "I appreciate your honesty! If you have specific feedback about what didn't reson…" (ytr_UgxR_oG6j…)
- "So the AI act like selfish humans. I wonder why the hell they do that......... S…" (ytc_UgxfUObXz…)
- "My ex used to say he was an artist. He used ai. He is an idiot…" (ytc_UgzjSaJxM…)
- "Why is man obsessed with inventing something to replace man?? The dumbest invent…" (ytc_Ugy9LTqxJ…)
- "I was already afraid of AI from the start, ever since I watched so many movies like Transformer…" [translated from Hindi] (ytc_UgxE6hg8b…)
Comment
I theorize that as long as there is more than 1 LLM "AI," there will not be an LLM-based superintelligence.
The reason for this is that LLMs put out an absolute ton of absolute garbage data that will one way or another get fed into the other LLM(s), causing that LLM's data to be more worthless, whose data will be fed back to the other LLM... it'll be a loop of spam and white noise garbage at one another that will just get more and more senseless over time.
Conversely a single LLM could theoretically consistently be updated with actual real human data, even though some human data is also garbage and spam. Perhaps a sufficiently powerful LLM could recognize the trolls, spam, and lies by pooling together an entire world of human-sourced information and data. I doubt that even a single LLM would have a chance to become superintelligent but considering how there's already multiple companies competing with one another to make the "best" LLM, there's almost no reason to really actually consider the scenario of only one single LLM existing.
Source: youtube · AI Moral Status · 2025-10-31T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzFWjPxkVWOsujH9ll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzT_V6rjZMblmZFKhx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwPqhU1Y94q7MlruVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwhmHIKsyvU8aT63894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyagZ-OLXQ1iiUpu-d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"horror"},
  {"id":"ytc_UgzfF7u5seJ-9W784G94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwc8cwVmqY2yStK5qp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzm40otCkmJW9KHb0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyBGAG-3NHjz1r77Pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugww88gxC1xcl4ZN7Cp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
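A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal illustration, not the pipeline's actual validation code: the required keys are taken from the records shown above, while the set of allowed `reasoning` values is inferred from those same records and may not match the full codebook.

```python
import json

# Truncated to two records for brevity; shape matches the raw response above.
raw = '''
[
  {"id": "ytc_UgzFWjPxkVWOsujH9ll4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzT_V6rjZMblmZFKhx4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
'''

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}
# Inferred from the sample records above; the real codebook may define more values.
REASONING_VALUES = {"consequentialist", "deontological", "mixed", "none"}

def validate(records):
    """Keep only records with all required keys and a recognized reasoning value."""
    return [
        rec for rec in records
        if REQUIRED_KEYS <= rec.keys() and rec["reasoning"] in REASONING_VALUES
    ]

coded = validate(json.loads(raw))
print(len(coded))  # 2
```

Validating each record before insertion means a malformed or truncated model response fails loudly at coding time rather than surfacing later as a blank cell in the results table.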