Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Current LLMs have an intrinsic limitation that is language itself. Language isn't descriptive enough to give LLMs a working world model. For example, they can produce an endless stream of recipes, or write an essay on how the benefits of it... but they'll NEVER know what chicken really tastes like. Same goes for any concept that's inherently larger than pure text. An LLM doesn't know what the color red is, it can't understand music, smells, etc.
Even with models that have image recognition capabilities, behind the scenes they're breaking down the images into a textual description. There's no symbolic representation going on, no consistent internal world model like we have, where you can take abstract concepts and use them to generate a self-consistent representation of the real world inside your head.
IMHO AGI will need a major leap both in terms of the way neural networks operate, and are trained, and in terms of hardware.
youtube · AI Moral Status · 2025-10-30T21:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxs2lO24uvmjzosP2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwp26tCeGfwDsz8TIN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwza1mVB8TWkmA04Dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwGd1XcvPV3vsN6Am54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxqWyhZtzQCHEiIUT54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy84uscfII_cT2162p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxwuAol13egUtpLs_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwxqDERJo-sXunM51J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy8a2jv9gsY1SgMMHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgziKVuSNTMgwTepucZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"disapproval"}
]
```
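The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how the per-comment codes can be looked up by comment ID — the two records below are copied verbatim from the response above:

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# (Truncated here to two records from the response above.)
raw_response = '''
[
  {"id":"ytc_Ugxs2lO24uvmjzosP2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy8a2jv9gsY1SgMMHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
'''

# Index the records by comment ID for constant-time lookup.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the coding for one comment and read off its dimensions.
code = codes_by_id["ytc_Ugxs2lO24uvmjzosP2V4AaABAg"]
print(code["responsibility"], code["emotion"])  # -> none indifference
```

This mirrors what the "Coding Result" table renders: one row per dimension, values taken straight from the matching JSON record.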