Raw LLM Responses
Inspect the exact model output for any coded comment; look it up by comment ID.
Comment
Chatgpt isn't smart, Chatgpt/LLM's are nonsense-by-default, useful output is a SIDE EFFECT.
"may make mistakes" in the disclaimers is because MISTAKES ARE THE MAIN FEATURE.
You cannot get determinism from a probabilistic system.
It doesn't even really do the all the "smart" things they hype it to do.
"smart" isn't even a good statement because it's still baking in the rhetorical/sentimental idea that the only tool we have or should use is by: comparing math to a humans in order to replace humans.
Even devolving into analogies of training LLMs being like "growing an organism", plants a very wrong insidious idea.
And that's the dumb af rhetoric game being played whose main goal has become to boost overvalued stocks while the floor falls for a long line of reasons not just "AI".
Useful OUTPUT is a side-effect, output != smart,
Chatgpt/LLM's are probabilistic nonsense-by-default.
nonsense is the core feature most everything else is illusory we have to force to happen.
nonsense is NOT the side effect, cohesive useful output is a side effect.
The biggest lie is "AI" (probabilistic LLMs) have understanding, or are "reasoning" or the bevy of other anthropomorphic sentiments, to hype services by slapping words in the UI and the marketing; and yes even stemming from researchers because they need marketable paper titles to get funding.
It's perverse how bad our language is in helping us mislead ourselves.
The illusion of useful outputs is because a ton of money and human time is burned to minimize the nonsense default.
Probabilistic and determinism are different words for a reason.
Saying an LLM is "smart" because it randomly pulls from a corpus of human knowledge is like saying a pile of shit is delicious because it's carbon atoms shaped & textured like a cake.
youtube
AI Moral Status
2025-10-30T21:3…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
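The four coded dimensions above take values from a closed codebook. As a minimal sketch, the sets below are only the values observed in this page's output (the real codebook may allow more), and `validate` is a hypothetical helper, not part of the tool:

```python
# Value sets observed in this page's raw response; the full codebook
# may define additional values, so treat these as an assumption.
OBSERVED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "approval", "resignation"},
}

def validate(coding: dict) -> list:
    """Return the dimensions whose value was not previously observed."""
    return [dim for dim, allowed in OBSERVED.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes with no flagged dimensions:
print(validate({"responsibility": "developer",
                "reasoning": "consequentialist",
                "policy": "none",
                "emotion": "outrage"}))  # []
```

A check like this catches a model emitting an off-codebook label before the coding is stored.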
Raw LLM Response
[
{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
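The batch response above is a JSON array with one object per comment ID, which is what makes lookup by ID possible. A minimal sketch of indexing such a response and pulling one comment's coding (the excerpt below reuses two entries from the response above; the variable names are illustrative, not the tool's own):

```python
import json

# Excerpt of the raw batch response shown above: a JSON array of
# per-comment codings, one object per comment ID.
raw = """
[
  {"id": "ytc_UgzrmdAGaBxHu3fE2od4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so any single coding can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgzrmdAGaBxHu3fE2od4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

Indexing once into a dict keeps each subsequent ID lookup O(1) instead of rescanning the array per query.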