Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwZLELL-… — "In my opinion, I don't think AI is that bad, it kinda is but the main reason tha…"
- ytc_UgwyYgX19… — "The real question is how does StarTalk go so quickly from "AI is overrarted" to …"
- ytr_UgzNYd6Sc… — "Unless there's a way to stop the others that don't comply. An AI to slow AI? How…"
- ytc_Ugzfn8mpd… — "I get it. It's not nice to see your face on a sex worker's body but... the fact …"
- ytc_Ugxeb0e3B… — "Im confused… I thought AI is just a massive language model. AGI does not exist… …"
- ytc_UgyQOX815… — "Nope, those automated systems are expensive (unless bought from a known knockoff…"
- ytc_UgyfZ2jQT… — "It's hard to discern the truth these days, as the idea that AI will take over th…"
- ytc_UgwfcDOMN… — "I too thank Chat GPT so that I may be privileged with double food rations while …"
Comment
I find it hard to understand how anyone can think this is AGI or even close. These models are prediction engines - experts at continuing something initiated elsewhere, but nothing more.
They never stop and reflect on something unrelated to the task at hand. Example: You give it an instruction, and it thinks solely about that. It doesn't step back, reason over related memories and events, or form a bigger picture that might question what it's being asked to do.
Here's a test: "It's raining outside and the weather is gloomy. My friend George is taking a walk outside. What mood is he in?" The AI will say it can't know because some people like rain while others don't. Fair enough - but it's not stopping to think "this is a strange question" or "this doesn't match what you've asked before" or "are you testing me?" It has no such understanding. It's a continuation engine, period.
So when 100K models talk to each other, of course it turns into a chaotic mess. This is getting out of control and could be extremely dangerous as models improve. Rogue actors can exploit this in scary, uncontrollable ways.
What this is NOT: intelligence resembling anything close to human intelligence.
youtube
2026-02-08T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyS_zR4OCMaQ4CGhex4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzVCB2pPVlGhaxx_zB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy135YvZo9K570wuH14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy8Xhe7p1eOOju9dpN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxBpxTpI6I7b_HA-bl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCkTCaKWBMqu6blHB4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxEjKl77rJK05qEKbl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgweOzvssEKhmQzuZgB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzS1RT-cfM_1M0hRlB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwYCnNsPvneChhRed94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
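The raw response is a JSON array of per-comment codes, so looking up a single comment's coding by its ID amounts to parsing the array and indexing it. A minimal sketch (variable names are illustrative; the two entries are copied verbatim from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as emitted by the coder.
# Only two entries from the response above are reproduced here for brevity.
raw_response = """[
  {"id": "ytc_UgyS_zR4OCMaQ4CGhex4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzVCB2pPVlGhaxx_zB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Index the codes by comment ID so any coded comment can be looked up directly.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

result = codes_by_id["ytc_UgzVCB2pPVlGhaxx_zB4AaABAg"]
print(result["reasoning"], result["emotion"])  # consequentialist indifference
```

The same indexing step also makes it easy to validate a batch, e.g. checking that every comment ID in the sample appears exactly once in the model's output.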