Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As an amateur (but serious) AI safety researcher, I can tell you AI is usually extremely accurate for well-established facts with lots of discussion in academic writing, but the accuracy steadily decreases for more obscure or debated topics.
Additionally, modern Large Language Models are often highly optimized to mirror the user and be agreeable. This means that if it is at all possible to agree with the user, they will.
These factors make them extremely unreliable for the use case presented in this video.
Going to AI to validate your own (non-scientifically-supported) opinion on a hyperspecific health choice is extremely dangerous.
It will likely give you the validation you wanted, but not because you were right.
youtube · AI Harm Incident · 2025-12-17T03:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwderVSkp_hvGdACJJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvCSjZAlgS0vOjYb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzpN-uvz-m5w3TX3dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8c0pqBuv86B5o4nF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwhKi8u2AocnM9txmh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyukuUWV35wHY9rPSJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxvl5Twvqs4LZa2fiN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUqDcYkIVQe_2XnZ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxuyVH2tnw7jctwghp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7YR6HJqIRbmJk_aN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
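A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the `ALLOWED` vocabularies are inferred only from the labels visible on this page (the real codebook may define more or different categories), and `parse_llm_response` is an illustrative helper name, not part of any existing tool.

```python
import json

# Hypothetical per-dimension vocabularies, inferred from the labels shown
# on this page; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only in-schema records.

    A record passes when every coding dimension is present and its value
    is in that dimension's allowed vocabulary.
    """
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

if __name__ == "__main__":
    raw = (
        '[{"id":"ytc_example1","responsibility":"user","reasoning":"virtue",'
        '"policy":"none","emotion":"resignation"},'
        '{"id":"ytc_example2","responsibility":"alien","reasoning":"unclear",'
        '"policy":"none","emotion":"fear"}]'
    )
    kept = parse_llm_response(raw)
    print(len(kept))  # the second record has an out-of-schema value and is dropped
```

Dropping (rather than silently accepting) out-of-vocabulary values makes it easy to flag comments the model failed to code and queue them for a retry or manual review.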