Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "How is most of the AI's are wrong were you will get 5,000,000 dollars for doing …" (ytc_UgwgfLodr…)
- "I used to use ai to see what my ocs would look like,then i practiced drawing the…" (ytc_Ugxe9xd1T…)
- "relax, as a non art people, i prefer human art rather than ai art even i can ask…" (ytc_Ugwul4n27…)
- "wait a minute ! THAT'S REALLY A VERY VALID POINT. I found a new way to argue wit…" (ytc_Ugy1H35_x…)
- "Maybe coding Maybe i Extremely doubt it though forget hackers will be using AI …" (ytc_Ugz9cSKL9…)
- "They literally don't realize one cyber attack can completely destroy this AI bey…" (ytc_UgyMvnizX…)
- "It’s sad that we all do this because- again- we did not ask for it. We did not w…" (ytc_UgxwxoR3I…)
- "There really are sick people who want to see the worst case scenario with AI com…" (ytc_UgwaR4hAm…)
Comment
I worked with neural network AI for over 2 decades. This shit is overhyped and hit its next wall already with its bruteforce scaling. Bottlenecks are: lack of new labeled data at scale; lack of capacity to compute next level of new data even if it appears somehow (e.g. through sensory realtime input); lack of energy at scale to power it all.
Jump from neural networks I integrated with 2 decades ago to current LLM is just due to maturing cloud computing + feeding all public Internet data collected and labeled by humans for several decades. We got nothing to perform the next level jump to gen AI, even if it's theoretically possible through currently tried linear growth hitting energy and infra constraints.
Source: youtube · AI Jobs · 2025-11-18T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgxlJt_OBiB_DEz6ull4AaABAg.APfzSm5ps4TAPgSW5dR4q_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzCgKJ2xoReaB9MIkd4AaABAg.APfzIeIClhPAPg011738sI","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwwPos9M0slXLT_nFh4AaABAg.APfyyISU2sEAPgGjl4u8KV","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugztite_u8gThriWXfN4AaABAg.APfyizjE00DAPg66b9c7J4","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugztite_u8gThriWXfN4AaABAg.APfyizjE00DAPgHmTA5dZ5","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugxji-p982_i-O6TBoN4AaABAg.APfyfMHYDncAPg3qYTvyx3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyObO5wv6zzcwIrm6x4AaABAg.APfyTFnjztZAPfzXhnjvir","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxIzKo9p-ypMPf5OEF4AaABAg.APfyMRyiYkPAPgD6B7avR5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxIzKo9p-ypMPf5OEF4AaABAg.APfyMRyiYkPAPgF1GFULN1","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxIzKo9p-ypMPf5OEF4AaABAg.APfyMRyiYkPAPgKwhC_ZK9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
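The coding-result table above is a rendered view of one record in this array: the raw model output is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of parsing such a response and looking a comment's coding up by ID (Python; the variable names are illustrative, and the string embeds two records copied verbatim from the response above):

```python
import json

# Raw LLM response: a JSON array, one object per coded comment.
# Two records copied verbatim from the response shown above.
raw = """[
  {"id":"ytr_UgxlJt_OBiB_DEz6ull4AaABAg.APfzSm5ps4TAPgSW5dR4q_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwwPos9M0slXLT_nFh4AaABAg.APfyyISU2sEAPgGjl4u8KV","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

records = json.loads(raw)

# Index by comment ID so any coded comment can be fetched directly.
by_id = {r["id"]: r for r in records}

row = by_id["ytr_UgxlJt_OBiB_DEz6ull4AaABAg.APfzSm5ps4TAPgSW5dR4q_"]
print(row["emotion"])  # indifference
```

The same lookup backs a "coding result" view: once the array is indexed, each of the four dimensions (responsibility, reasoning, policy, emotion) is a plain field access on the matched record.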