Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgxGGDAFC… — "Exactly!! Not to mention generative AI uses real artists work to train itself, …"
- ytc_UgxMU_RXl… — "So ur telling me, AI learns art the same way humans do? As in, humans also inges…"
- ytc_Ugxt8njtO… — "AI art should only compete with AI art, simple. But it is a creative process. …"
- ytc_UgzH_aaJA… — "Women gonna have to step their game up so much. Getting replaced in real time.…"
- ytr_UgwXb9fcx… — "Your comment reveals how unaware people are of whats coming... Amish? You think …"
- ytc_Ugygt3c2m… — "The ai nerds that are so upset by artists defending themselves are genuinely jus…"
- ytc_UgxQtAp7g… — "They say customer serverchat bots take jobs. As a consumer, Uber driver and app…"
- ytc_UgyD2QqEg… — "this entire concept is only substantiated on the idea that LLMs will advance exp…"
Comment
This is the part they need to work on most. I don't care if new models are smarter; they're already so smart that as a layman I'm not limited by how "smart" it is. I just want them to reduce hallucinations.
I've been using Gemini to plan my weekly running routine to get faster; I fed it some data after a run today and it basically said "That's a good job considering you were on tired legs after your run yesterday" and I had to remind it "Huh? I haven't run since Saturday" at which point it admitted it was thinking "yesterday" was actually last Wednesday. I've actually had *two* runs since then (not including today) and I fed it the data on both, so it was aware of them.
It makes me wonder how many times it's hallucinating things that I'm not catching. That said, it's not like if I hired a human coach it couldn't have made a similar mistake, so I'm not *super* concerned, but it is something I wish they would focus on.
reddit · AI Moral Status · 1765319692.0 · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nt6usbo", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6njvp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_nt6wlv2", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6qx0h", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6jk1j", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
```
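A raw response like the one above has to be parsed and validated before its codes can be shown in the per-comment table. A minimal sketch of one way to do that, assuming the response is a JSON array of `{"id", "responsibility", "reasoning", "policy", "emotion"}` records. The `ALLOWED` value sets below are only the values observed on this page, not a confirmed codebook:

```python
import json

# Hypothetical allowed values, inferred from the coded examples on this page;
# the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping any record with an unknown dimension or disallowed value."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record.get("id")
        codes = {k: v for k, v in record.items() if k != "id"}
        valid = comment_id and all(
            v in ALLOWED.get(k, set()) for k, v in codes.items()
        )
        if valid:
            coded[comment_id] = codes
    return coded

raw = (
    '[{"id":"rdc_nt6usbo","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none",'
    '"emotion":"indifference"}]'
)
print(parse_coding_response(raw)["rdc_nt6usbo"]["responsibility"])  # user
```

Dropping invalid records (rather than raising) keeps one malformed line in a batched response from discarding the other comments' codes.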