Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugz1vZ476… : I think it's a load of 💩. You have to ask yourself why AI would even care? Answe…
- ytr_Ugxln8FHm… : @ee-nk5sw bro.. if you want to write a book commission someone to do the art, do…
- ytc_UgwJrAFRH… : If you don't think this world is corrupt enough well then I guess you didn't wat…
- ytc_Ugw8vCUKj… : Chatgpt doesn't have continuity of self, each chat session is essentially a echo…
- ytc_UgzJZpuc0… : What an idiot all animals are self aware even an ant will recognize another ant…
- ytc_UgyNDGnq3… : I really hate the argument that AI is a tool. I mean, AI is indeed a tool that y…
- rdc_n9hpov5 : From my rather jaundiced perspective, the actual goal of a lot of AI initiatives…
- ytc_UgwxYORLG… : Ai because the doll couldn't be dead cause she is an electric and robots is elec…
Comment
What a bunch of horseshit. This current, 4th in recent decades, AI hype cycle is the most damaging hype cycle to AI research. LLMs are portrayed as something that will get humanity to AGI and replace most jobs, while the same LLMs cant answer basic logical questions. Forget about some BS benchmarks. Real people with real jobs all say the same thing. LLMs are crap. They are good in simplyfing easy, repeatable (and well-documented) tasks. They suck big time in real-life tasks. And that's exactly what you would expect from a STATISTICAL model that basically just combines words that fit together based on the prompt. This delulu guy in the interview (whoever TF he is) is wrong on many levels.
Source: youtube · AI Jobs · 2026-03-05T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwXloj4DiKiGvFPiYR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuwJrNqQF3nJUpr554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyniJKSXkla294VQ2h4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw_W7TerPDDyeW8a9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsJZD1al_zw7sq7fZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhQesH3EAW0CKvh4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwePDAihiDMQDNMSVp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxnAcRosv_HqFy0ZuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKj_qlgBmvOlsaA9p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx7vfZy8sF1cAg-ZEl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
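A response like the one above can be parsed into per-comment codings and sanity-checked before storage. The sketch below is a minimal illustration, not the tool's actual ingestion code; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown, while the allowed-value sets are an assumption inferred from the sampled values and may be incomplete.

```python
import json

# Values observed in the responses above; the real coding scheme may define more
# (these sets are an assumption inferred from the samples, not a definitive schema).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into {comment_id: coding}, flagging unknown values."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        out[rec["id"]] = coding
    return out

# Hypothetical one-record response, mirroring the table's coding for the comment above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # outrage
```

Keying the result by comment ID mirrors the page's "look up by comment ID" behavior: once parsed, any coding is an O(1) dictionary lookup.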