Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "YALL HELP I CANT GET CHAT WITH CHARACTER AI- I TRY TO SEARCH UP A BOT AND IT JUS…" (ytc_UgwVUh2yO…)
- "The thing with AI is that it can't simply be commanded to do things, it isn't ab…" (ytc_UggAVsZqH…)
- "NGL Shadow people was scarier. There's just no way ai would do this, ai is way …" (ytc_UgyklT7qx…)
- "I was scared as hell seeing one of these self-driving cyber trucks on the 91 fre…" (ytc_UgxyCwz_F…)
- "No, they can't, the ai just mimics stuff it sees on the internet/ stuff that it …" (ytr_UgxCuVYuP…)
- "I know AI artists aren't true artists but AI itself isn't bad, it's the people w…" (ytc_Ugxt-Sltu…)
- "Sub text “we’d like to drag you down to our level from a food safety and pharmac…" (ytc_UgytBqTfE…)
- "AI oligarchs claims boasting elimination of cancer & medical breakthroughs. How …" (ytc_Ugzmp4Wkq…)
Comment
>These models are capable of reasoning through unique cases at my work.
They really can't; they just apply patterns they were trained on. I work as a developer, and half the time the code agent gets stuck in a loop doing and undoing things, or suggests improvements that would break its previous work. It has no understanding of the larger codebase.
They are not the chatbots of the late 2000s, which were just chained ifs and cases, but people who think LLMs actually reason have very low standards for what they consider reasoning.
A simple check: grab a chessboard image, move the pieces around from the initial setup, then ask the LLM to describe an opening. It will identify that the layout is fucked up, and still move the knight as if it were a bishop or a pawn. LLMs don't understand abstract concepts and cannot grasp the idea of one piece in a game behaving differently from another; they only understand that in chess, to win, you usually have to start the game by moving one thing into another predefined position.
reddit
AI Moral Status
1750956973.0 (Unix timestamp, approx. 2025-06-26 UTC)
♥ 34
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mzwiulz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_mzwvfsp","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mzwnged","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mzwwku6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mzxm3nm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
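Each coding run returns a JSON array like the one above: one object per comment, carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed to support the "look up by comment ID" view; the function name and validation logic are illustrative, not the tool's actual code. The sample data reuses the first two rows of the response above.

```python
import json

# First two rows of the raw LLM response shown above (truncated sample).
RAW_RESPONSE = """
[
 {"id":"rdc_mzwiulz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_mzwvfsp","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw model response and index the codings by comment ID."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        # Every row must carry an ID plus all four coding dimensions.
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed coding row: {row}")
        indexed[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["rdc_mzwvfsp"]["responsibility"])  # prints "developer"
```

Indexing by ID up front keeps each lookup O(1), which matches how the page resolves a pasted comment ID to its coded dimensions.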