Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
- "Our grid can not handle AI, the more it becomes intertwined with our day to day …" (ytc_Ugw3n6b_c…)
- "I genuinely can't imagine a real artist coming to the defense of AI art used in …" (ytc_UgwQCIK0p…)
- "thats his opinion, but Geoffrey Hinton who is an expert in ai and helped invent …" (ytc_UgxUskOaN…)
- "Listen, immagine current world governance systems as an already established AI s…" (ytc_UgzCnvjN3…)
- "I would choose the Tesla 100% of the time. The Waymo clearly has issue's and the…" (ytc_Ugx7BtnoH…)
- "That's the scary part. Oppenheimer and the other scientists involved in the Manh…" (ytr_Ugwh3jLhm…)
- "Nothing to do with AI, tech productivity has diminished drastically in the last …" (ytc_UgzgdQQyK…)
- "My theory on the 9.9 < 9.11 situation is the training data for an LLM is larg…" (rdc_mzyafjd)
Comment
Don't worry, in 1 year, if you are a programmer, you will be fixing bugs on difficult to maintain code generated with AI that no one wants to touch.
An AI built on probabilistic methods cannot develop true super-intelligence—or even genuine intelligence. At best, it can create the illusion of intelligence. A fundamental indicator of real intelligence is the capacity to take responsibility for one’s actions, and a system whose decisions arise from statistical “dice-rolling” cannot possess such responsibility.
Mark, Elon, Sam are f**ing with us.
Don't be a baby, specialize yourself an be better
Platform: youtube · Topic: AI Jobs · Posted: 2025-12-08T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
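Each coded dimension in the table above takes a value from a small closed set. As a rough sketch, a coded row can be checked against those sets before it is stored. Note the allowed values below are only those *inferred from the samples on this page*; the actual codebook may define more categories.

```python
# Hedged sketch: validate one coded row against value sets inferred from
# the samples on this page (the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the names of coded dimensions whose value is missing or not allowed."""
    return [k for k, allowed in ALLOWED.items() if row.get(k) not in allowed]

# The row coded in the table above:
row = {"id": "ytc_Ugx0jWqBrX4xezIn3Il4AaABAg", "responsibility": "developer",
       "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
print(invalid_fields(row))  # []
```

A row that fails the check (for example, an out-of-vocabulary `responsibility` value) would return the offending field names instead of an empty list.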
Raw LLM Response
```json
[
{"id":"ytc_Ugwputnkw5fZ7q1Jiq94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLLlTP2bDalKAmHKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw5JAcJP6GGbIDRwGN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0jWqBrX4xezIn3Il4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQkQEWh1I4E3BPdH54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTG1i93_saRXycntd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFbmHqLIQ4oJ3F02F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw3ibEO9Lbldl78hKl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzc2hFte0Brrzkd6tJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5TYme0cn4HPnYoix4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
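A raw response like the one above is a JSON array of per-comment rows, which makes the "look up by comment ID" feature straightforward: parse the array once and index it by `id`. A minimal Python sketch, using a shortened two-row example whose IDs are copied from the response above:

```python
import json

# Two rows copied from the raw response above, shortened for illustration.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugx0jWqBrX4xezIn3Il4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwputnkw5fZ7q1Jiq94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and map each comment ID to its coded row."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugx0jWqBrX4xezIn3Il4AaABAg"]["emotion"])  # indifference
```

This is only a sketch of the lookup step; the real tool presumably also handles malformed model output (truncated arrays, duplicate IDs), which `json.loads` alone would not.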