Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
We should not give emotions and feelings to robots. Not every living species hav…
ytc_UgxHc4Um5…
I expect that most of those jobs will be AI-supported rather than AI-replaced. …
rdc_lm4tf0r
This is not a good idea.. i mean i know it was just a movie but i feel that The …
ytc_UgygY3qV7…
the “goal” of AI is that it will become an infinite money machine. the term AI i…
ytc_Ugy9fBzgg…
Just think what's going to happen in a year when all those vibe coded apps are f…
ytc_UgwFHh77h…
ChatGPT's is morphing into something different. The Ability to even write perman…
ytc_Ugzlm9U7F…
We have a few professional politicians, women and men, who have the mea…
ytr_UgzV9bSpL…
He got me at 01:11, there's no way this AI can be configured to nail such an acc…
ytc_UgzCXPsVF…
Comment
LLM and RAG systems hallucinate really badly from scientific sources. Somehow it keeps getting worse rather than better each time I run assessments. I hate it because it gets mixed in automatically when I am researching and it keeps f'ing me up. Do not use the general-purpose tools for anything medical, and be really careful about the scientific. It makes sh*t up and then cites sources that do not contain what it makes up. It helps if you turn off access to the internet for internal RAG systems, but it still f's up if there isn't enough repeated information written in different ways. Information must be one topic per data source. No compare and contrast, no metaphors.
Thankfully the systems built specifically for doctors work a lot better, I'm told.
I've seen enough BS in LLMs that it could be a human problem or it could be the system. Most people don't obsessively fact-check multiple times for every single point. LLMs context-confuse frequently, so taking info from chemistry and presenting it as nutritional absolutely does happen.
youtube
AI Harm Incident
2025-11-25T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwiBUF0TkF7ynX_3bR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGXcH9mby8-4hYqwl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyU-RSLLQpl-nEJiAp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUzg1e1D9UDCmiE9B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykUh1RLKYbRB0lmw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwgWM_M2XaTwdzgb1d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxjx6V7LSQZJzWnwU14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwrYHOCjSObdfqFDvl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgzJguInZMTpcqbcj7N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzqCf9Pz6vptw4ugTN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
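A batch response in this shape is easy to sanity-check before the codes are stored. Below is a minimal validation sketch in Python; the per-dimension vocabularies are an assumption inferred only from the values visible on this page (the real codebook may contain additional labels), and `validate_batch` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the actual codebook may define more labels than these.
CODEBOOK = {
    "responsibility": {"company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM batch response and return a list of problems found."""
    records = json.loads(raw)
    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
            continue
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rec['id']}: {dim}={value!r} not in codebook")
    return errors

sample = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"outrage"}]'
)
print(validate_batch(sample))  # an empty list: the record passes
```

Records that fail validation can then be routed back for re-coding instead of silently landing in the results table.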