Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Just give it 2 more months!! You'll see how useful it is if you just wait for i…" — ytc_UgyHbc80Z…
- "Super Rich people believe they are above any risk in society, as soon as they ha…" — ytc_Ugw9sjI3u…
- "I kinda wanna get back into doing booths at Cons but these AI hacks would piss m…" — ytc_UgzFzY7or…
- "If parents were locked in with their kids and everyone sent well behaved, curiou…" — ytc_Ugy_5VfmE…
- "I also pay for ChatGPT 4.o. This is exactly what it’s like to interact with it. …" — ytr_UgyALY41u…
- "Often, the CEO‘s of all the big tech companies, that are involved, in this whole…" — ytc_Ugxf3-_pQ…
- "No, it means nothing other than OpenAI had a deadline to release something and m…" — rdc_n7pgvuk
- "So, I just released an entire Afrobeat project using vocals generated from Suno …" — ytc_Ugxiymvnr…
Comment
Large Language Models are LANGUAGE MODELS. In order for them to give such warnings, they have to be instructed to do so. LLMs are not intelligent. They generate conversation. That's it. They use a context to help keep the conversation viable, but information they deliver is a result of a very advanced markov chain generator. Until these language models are attached to knowledge models, graphs, database, etc, and not just filling a context window with more language from other sources, they should never be trusted to give accurate information, ESPECIALLY when it comes to health and medical advice. They're interesting toys, and that's it.
youtube · AI Harm Incident · 2025-11-25T01:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugwzw_o1EuSBcQhbjKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjhPpqvHJApiD_nRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqMPU0ljM3OFnTl0V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwNJmZku41M303ixkd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxHp5dE0Dw1E2sJ57h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzOcsOy57W31pIrz-14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUOdrlwIVhgL5Qe1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZyA0jFzT4AndYWp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxK9PVGt-MmlhPV9sV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzp0ac_bhsbQXJLPV54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
```
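Raw responses in this shape are not guaranteed to be well-formed JSON, so it is worth validating them before loading codes into the database. Below is a minimal sketch (the function name and the validation rules are assumptions, not part of this tool) that parses a batch-coding response and keeps only records carrying an `id` plus the four dimensions shown in the Coding Result table:

```python
import json

# The four coding dimensions from the Coding Result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}


def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only well-formed records.

    Raises json.JSONDecodeError if the response is not valid JSON
    (e.g. a truncated or mis-terminated array).
    """
    items = json.loads(raw)
    valid = []
    for item in items:
        # A record must be a dict with an "id" and all four dimensions.
        if isinstance(item, dict) and "id" in item and DIMENSIONS <= item.keys():
            valid.append(item)
    return valid


sample = (
    '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_y","responsibility":"ai_itself"}]'
)
print([r["id"] for r in validate_codes(sample)])  # → ['ytc_x']
```

Records missing a dimension fall through silently here; a production version would likely log them so they can be re-coded, which is what surfaces as "unclear" values in the table above.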