Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_UgzCf7RiL…: "1. I have actually not seen much AI generated stuff in normal media at all. 2. M…"
- ytc_Ugwnr_SaQ…: "Disabled artist here! I am physically disabled and have severe chronic pain, and…"
- ytc_UgyTPkQQT…: "... so AI predicted he would be in a shooting, and he was? _Working as intended.…"
- ytr_UgxRrTCw9…: "It seems to me that the use of the term AI is too loose, when applied to these t…"
- rdc_dftia6a: "The Uber self-driving car requires the human passenger to take control of the ca…"
- ytr_UgxFEGUku…: "@drluswala Yes but AI is also in it's infancy. Once strong AI emerge then all ma…"
- rdc_ngsqsky: "I mean, it would be sad if the reasonable thing to do, explaining that the LLM w…"
- rdc_f31fsvu: "No they really don’t, and less you can actually cite the house rule, law or coun…"
Comment
I work in the industry. FWIW AGI (or whatever name you want to give to the truly autonomous, world-disrupting technology) will not come from LLMs. Plain and simple. There will need to be another extreme leap in the technology that is not a next word/pixel guessing machine. LLMs are essentially remixing their training data, which is a powerful tool that can generate interesting results, but it does not live up to the hype. The AI labs know this, which is why they continue research into other avenues of AI development that are not LLM technology.
LLMs, as they stand now, are revolutionary technology that can be trained on specific tasks and produce good results (especially the more they are tuned on specific actions and data sets). I expect it will be like the Dot Com bubble, where the technology doesn't live up to the hype, but once the dust settles we will be left with a valuable new technology that can be built upon over time.
But AGI? Nope.
youtube
Viral AI Reaction
2025-11-04T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx6Pwft-h9hVq76sop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyYyemG3tKKDmCMXTl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3S1kXxr5DAH2msKx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyFYjp8zesUibxaA9Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyYTEORjgN0hqDBaOd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx98RHEPQO64sLCS8Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZ75PpTcglJ3TEVDF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzeEPEBxZOpk4N53Hl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzA7Nyf3Pkgv9cy8BN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyibE4oRoD2HfIYUZJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
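The raw response above is a JSON array in which every object carries a comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion), so the "look up by comment ID" view can be reproduced by indexing the array. A minimal sketch in Python, using two entries copied from the array above; building a dict keyed on `id` is an assumption about how the lookup could work, not this tool's actual implementation:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_Ugx6Pwft-h9hVq76sop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyYyemG3tKKDmCMXTl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Parse the model output into a list of dicts.
codes = json.loads(raw)

# Index by comment ID so any single coded comment can be pulled up directly.
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_Ugx6Pwft-h9hVq76sop4AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```

In practice a malformed model response would make `json.loads` raise `json.JSONDecodeError`, which is a natural place to catch and log bad generations before coding results are stored.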