Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @ThereIsAlwaysaWay2 I'm subbing to ChatGPT, Grok, and Claude. I like them for d… (`ytr_UgwaVb8ne…`)
- Nothing scary. Many have know for years that 1 child policy is correct, and AI s… (`ytc_Ugy3CMWnf…`)
- @queenwaffles7681 im glad you're empathetic towards people who are intimidated b… (`ytr_Ugw9n2SXp…`)
- Yess use ai to lighten the load for animators!!! Use up all our water!!!! Awesom… (`ytc_Ugx3owN1N…`)
- Hear me out This is amazing but someone should really do an "AI trickery" event … (`ytc_UgykFur46…`)
- You could say there's a level of intentionality in the same sense that there's a… (`rdc_j8c2npe`)
- Broadly calling use of LLMs cheating is a bad take. Incorrect use of any tool is… (`ytc_UgzrFiN7s…`)
- Guy's, I've noticed it around year 2016, at first I thought know way, it's my im… (`ytc_UgxLzA2c8…`)
Comment
I do research on these systems, specifically the part that enabled these new chatbots. I think 30-50 years is still a good estimate. The whole field is sort of overtaken by AI hype, but because of the insane funding in the space it’s hard to tell who is making honest assessments and who is overhyping their tech for funding purposes. OpenAI is a big culprit here, their tech is incredible, but its also an incremental improvement over models we had last year and very far from the general intelligence they claim to be building.
AI is now able to automate simple cognitive tasks in the same way that robots started to automate simple manual tasks years ago. The limiting factor is that these systems, not just the new models but all of deep learning, fundamentally learns in a more brittle way than humans. They are exceptional at memorizing and interpolating between information, and decent at learning true general patterns, but they make mistakes far more often than humans. Think of how many jobs you would be fired from if you made 10 times as many mistakes. That’s at best intern-level work, if that. And these models aren’t agents, they don’t act on their own, so its like an intern you have to babysit and tell exactly what to do. There’s unlikely to be any exponential leaps on those issues in the near future, although its obviously a big area of research.
There are many tasks where memorizing information and generating solutions are all that you need. Things like formatting data, writing short code scripts, or editing essays. Specifically tasks where the hard part is manipulating text, they will be exceptionally good at. But while they can occasionally do non-text based tasks if you format them as text (navigating a 2D ascii map or giving you a plan to develop a new software project), they have a much higher error rate than humans. You may think that you could just train them on more text for longer to reduce that error, but there’s only so much that you can learn from seco
reddit
AI Governance
1682955144.0 (2023-05-01 15:32:24 UTC)
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jifx5f7", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jifdth1", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jifzy5l", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jifhghe", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jiflfd0", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
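The raw response is a JSON array of per-comment coding records, each keyed by a comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of how such a batch response could be parsed and indexed — `index_codings` is a hypothetical helper, not part of any tool shown here, and the sample is truncated to two records from the response above:

```python
import json

# Two records, verbatim from the raw batch response above (truncated for brevity).
raw = (
    '[{"id":"rdc_jifx5f7","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jifdth1","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

def index_codings(raw_response: str) -> dict:
    """Parse a batched coding response and index the records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
# Look up one coded comment by its ID, as the page's lookup feature does.
print(codings["rdc_jifx5f7"]["responsibility"])  # company
```

Each record carries the same four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion), so a lookup by ID returns everything needed to render that table.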