Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
My understanding is that LLMs use a sort of algorithm or statistical analysis/text prediction to guess what the best answer/output is.
However, the issue with this is that their output is restricted to their training data/information on the web.
They cannot truly "think". They cannot use critical thinking to come up with the answer.
So they are useful for quickly summarizing the mainstream answer, and if the mainstream thinking on any given question is correct, then AI will output the correct answer.
However, the paradox is that the mainstream thinking is often wrong, especially for more complex questions. So AI will in such cases just parrot the most prevalent answer, regardless of its validity.
Some may say this can be fixed if it is programmed correctly. But wouldn't that defeat the purpose of AI? Wouldn't it then just be parroting its programmers' thoughts?

Also, the question becomes who programs it? The programmers will not be experts on all topics. Even if they hire experts from different fields, the question becomes, which specific expert/expert(s) are correct/how were they chosen? This would come back to the judgement of the programmer/organization that is creating the AI, and this judgement itself is flawed/insufficient in terms of choosing the experts. So it is a logical paradox.

This is why AI will never be able to match the upper bounds of human critical thinking. Remember, problems primarily exist not because the answer/solution is missing, but because those in charge lack the judgement to know who to listen to/pick.
Source: reddit · AI Jobs · timestamp 1754675504 (Unix epoch) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_n7k5g2s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_n7key8u","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"rdc_n7ky9c3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_n7n1sz2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_n7hfsaj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
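The coding table shown above is simply the entry for this comment's ID pulled out of the raw JSON array and reshaped into dimension/value rows. A minimal sketch of that lookup, assuming the raw response is always a JSON array of objects each carrying an `id` field (the `lookup_coding` helper name is hypothetical, not part of the tool):

```python
import json

# Abbreviated copy of the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id":"rdc_n7k5g2s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_n7key8u","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dimensions for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Drop the id itself; the remaining keys are the coded dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

coding = lookup_coding(RAW_RESPONSE, "rdc_n7k5g2s")
# coding now holds the dimension/value pairs rendered in the table above,
# e.g. responsibility = "none", emotion = "indifference".
```

A real viewer would likely also validate that every expected dimension is present before rendering the table, since the model output is not guaranteed to be well-formed JSON.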