Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "AI is getting better at being a student than humans. What is the point of gettin…" (ytc_UgyHbdYBJ…)
- "If we can create a realistic simulation, then it's likely we're already in a sim…" (ytc_UgyjdfGl3…)
- "My question is, if Dr. Yampolskiy believes we already are in a simulation, then …" (ytc_UgzMJRV4B…)
- "So the AI Bro is named "SirSpamsALot"? Well, at least the name checks out. :P…" (ytc_UgyAmNg5Q…)
- "°∆ I believe we are meant to be like Jesus in our hearts and not in our flesh. B…" (ytc_UgxG0NV6U…)
- "There is no way AI's will be able to properly react to the road rage here in NYC…" (ytc_UgjogtRlo…)
- "If AI gets too powerful and dangerous, Trump will try to scare it with tariffs.…" (ytc_UgwA_pBYS…)
- "That's honestly amazing how it went from wrong turn, to correcting itself. Didn'…" (ytc_UgwawnP5E…)
Comment
AI may have saved my life. I had severe gallbladder issues, regular blockages and impacted gallstones. The gastroenterologist, the hepatobilary clinic, even my GP (GPT-OSS 120B) played it down and said nothing was wrong and to come back in a YEAR. Well.. after the last episode AI insisted I go to hospital and as you probably know they are rarely pushy or insistent, they are the worst people pleasers.. that disturbed me. I went to hospital, just to be sure.
I was admitted and had surgery the next morning, no more gallbladder. They told me it was "red and angry".
AI was right, but the final decision was down to me and I made the right one.
youtube
AI Harm Incident
2025-11-25T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxTQofSNZifGGY_zlF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxLcx4b2TFY2oZZIFF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwACnBVpoqPg3_5O_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwRBojq-M0EpET2mE94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFW_v8qniTiVYueil4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkmODSZzxurCU_nUN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxPtCTcgq6pqX1rjsx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0O9vwTGE9ilCsKfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwyuK9DcONPGyir9yF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFJNJkQsZCiTTDJuZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
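The lookup-by-comment-ID step above can be sketched in a few lines: parse the model's JSON array and key each coding row by its `id` field. This is a minimal illustrative sketch, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, while the function name and the two-row sample are hypothetical.

```python
import json

# Two rows taken from the raw response above, used as sample input.
raw = '''[
  {"id": "ytc_UgxTQofSNZifGGY_zlF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwACnBVpoqPg3_5O_l4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding row by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw)
print(codes["ytc_UgxTQofSNZifGGY_zlF4AaABAg"]["emotion"])  # → outrage
```

Keying on the comment ID makes inspecting any single coded comment an O(1) dictionary lookup rather than a scan of the whole response.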