Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "In no way did I gain any insight into the relationship between automation, unemp…" (ytc_UgwXjxeJ5…)
- "The problem is that we cant "leverage" this since it doesnt actually produce the…" (ytr_UgxIh3Ptg…)
- "May be you should start using ai to explain to you that "carbon emissions" are n…" (ytc_UgxrQ-sVq…)
- "Blaming A.I on a kid killing himself is like blaming horror movies for murders. …" (ytc_Ugzp3CBXw…)
- "The biggest difference between the bomb and AI as existential threats is that we…" (ytc_Ugx_cUwVm…)
- "Oh, I am answering, my friend. Your reply is an unmanned gun on an unmanned car.…" (ytc_UgxSMMJM6…)
- "Bro A.I is actually made to help us in certain specific ways in daily life but …" (ytc_UgycWbsZm…)
- "Whenever the commente are like "opus gpt pro 5000 will replace you buddy" I real…" (ytc_Ugx2PzQhM…)
Comment
A.I. for healthcare is terrifying because A.I. cannot be 100% accurate. The fact that mistakes made by A.I. are called "hallucinations" does not sit well with me. When you have hundreds of people developing an A.I. model, it's impossible to determine who is liable when something goes wrong. Corporations love this because they can replace their staff and limit their liability at the same time.
Even if this patient had kept the chat transcripts, I assume ChatGPT (the company) would claim that its model hallucinated and they've patched it accordingly. It's slimy that their patch included flat-out denying the existence of the conversation. This only makes sense if chat transcripts are not fed into the model to help train it, which I doubt is the case.
youtube
AI Harm Incident
2025-11-26T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyPcZyyhSKq1VfvBHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzi-_Fap-wdL5Zp4kh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzm_j_Qb58zGyZXA6B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxNm8yFA4J4CcFuZlt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzzrHSrSr_igtZtAd54AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw9c5Kf-w5xx5_2djR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWgwaR3wXVQKu_h7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxoOJMNoWDis0v3TWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxN0Jtr0BeBhJ8z7hd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUe9zIi_SobhB4NI54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
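A batch like the raw response above can be sanity-checked before it is merged into the coded dataset. Below is a minimal sketch; the allowed label sets are inferred from the values visible on this page, and the real codebook may contain more (an assumption), so `SCHEMA` would need to match the actual prompt.

```python
import json

# Allowed values per coding dimension, inferred from labels visible on this
# page; the actual codebook may define additional labels (assumption).
SCHEMA = {
    "responsibility": {"none", "distributed", "user", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "resignation", "outrage", "approval", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and check every row against the schema.

    Raises ValueError on missing ids, missing dimensions, or unknown labels,
    so a malformed batch is caught before it reaches the coded dataset.
    """
    rows = json.loads(raw)
    for i, row in enumerate(rows):
        if "id" not in row:
            raise ValueError(f"row {i}: missing comment id")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"row {i} ({row['id']}): bad {dim}={value!r}")
    return rows
```

Validating at ingest time keeps a single hallucinated label from silently skewing the dimension counts downstream.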