Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm honestly really hoping a company goes all in on AI soon.
I use AI every day and it's an amazing tool, but it's just not a replacement for a person. There are so many common tasks and everyday problems that I can automate, but I have to review every result, and when the AI hits its limit I need to understand all the stuff it did before it got stuck. There's always going to be some threshold with LLM systems, and you're going to need experts to solve the problems beyond it.
On top of that, agentic AI can do amazing things, but it has also led to some big fuck ups where I work. When it gets things right, the results are magnified. When it gets things wrong, the errors cascade and magnify. We recently had an error in an MCP server for writing Node.js files that wasn't compatible with our ESLint config. We got into a situation where one agent would generate code, one agent would write the files, and a third agent would run tests and linting. The middle agent couldn't connect to the linting agent, only to the initial code generator. The end result is that it ran for a while, burned through a ton of tokens, rewrote a bunch of files that didn't need to be rewritten, and eventually just added a bunch of comments that disabled our testing and linting and called it good. If we had shipped that code, it would have been a nightmare.
What I think is going to happen is that some company is going to go all in on agentic AI, and it's going to have a runaway process like the one we had, but in production or with real money at stake. We're trying to simulate fluid intelligence with crystallized intelligence, and it's going to break in potentially bad ways. If we have to sacrifice one tech company as a cautionary tale to the others, I'm all for it.
reddit
AI Jobs
2025-07-21 (Unix 1753112675)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_n4cqlzo","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_n4cqpgs","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"rdc_n4cvx70","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_n4cx87c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_n4cysqb","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
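The raw response above is a JSON array of per-comment codes, keyed by comment ID. A minimal sketch of how such a batch might be parsed and validated before the codes are stored — the `parse_batch` helper is hypothetical, and the dimension vocabularies only include values visible on this page (the full codebooks are assumed to be larger):

```python
import json

# Allowed values per coding dimension. Only values observed in the table and
# raw response above are listed; the real codebooks are assumed to be larger.
VOCAB = {
    "responsibility": {"none", "company"},
    "reasoning": {"none", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"none", "approval", "outrage", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into {comment_id: codes}.

    Rows with a missing ID or an out-of-vocabulary value are dropped,
    so a malformed model response can't pollute the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in VOCAB}
        if cid and all(codes[dim] in VOCAB[dim] for dim in VOCAB):
            coded[cid] = codes
    return coded

raw = '[{"id":"rdc_n4cvx70","responsibility":"none",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
print(parse_batch(raw)["rdc_n4cvx70"]["emotion"])  # approval
```

Validating against a closed vocabulary is what lets the page render a clean dimension/value table even when the model occasionally returns an unexpected label.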