Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Still 20+ years away from AI and robots taking over. It will start to take of i…" (ytc_UgwGCjVsA…)
- "Okay, so I am a real artist and I take a lot of time drawing and editing mistake…" (ytc_Ugxq79aS6…)
- "Pathetic. Joe random makes a picture using AI and everyone is up in arms over it…" (ytc_Ugw4FMtmT…)
- "Time to go self-employed, or start your own business. Then, you can decide wheth…" (ytc_Ugy9VZFxo…)
- "AI is definitely not green, not to mention gen ai creates nothing of real value,…" (ytr_UgzinjDfc…)
- "Looking through human artists' work It's like getting gold from a river. I'll be…" (ytc_Ugx3fUWMW…)
- "im really thinking that ai is somthing created to down the salary of programmers…" (ytc_Ugx4EpxI7…)
- "They talk about human driving like its the pinnacle of driving...human drivers a…" (ytc_UgzxLfbZW…)
Comment
No! LLMs don't "consider" anything.
They take the prompt you give them, then using a massive relational database, determine the most likely words that would respond to such a prompt.
They don't "know" what they're taking about. They cannot think. They don't have any idea what any of the things they're saying actually mean. They're just giving you the most probable response based on the weights of all the input data.
Like, when you ask these tools to draw a cat, they don't know what a cat actually is. They don't know what is eyes, noise, ears, etc. are. They just know that, given your prompt, each pixel in the response is most likely going to look a certain way, and that's it.
Stop anthropomorphizing these things.
Source: reddit · Thread: AI Jobs · Posted: 1772204394.0 (Unix timestamp) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
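Each coded record assigns one value per dimension. A minimal sketch of validating a record against the value sets, assuming a closed codebook inferred only from the codings visible on this page (the `CODEBOOK` contents and the `invalid_fields` helper are illustrative, not the tool's actual schema):

```python
# Hypothetical value sets, inferred from the sample codings shown on this page.
CODEBOOK = {
    "responsibility": {"developer", "distributed"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "fear", "outrage"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

record = {"id": "rdc_o7qliov", "responsibility": "developer",
          "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
print(invalid_fields(record))  # → []
```

A check like this catches off-codebook values before they reach the results table.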
Raw LLM Response
```json
[
  {"id":"rdc_o7ohrwj","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_o7ojuwr","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"rdc_o7ojynh","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_o7qliov","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_o7pl6a6","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```