Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This is a bit of a tangent, but I feel like there needs to be some discussion on how AI was presented here.
I really appreciate what Chubbyemu does as a public-facing medical educator; using these stories to explore both the practice and theory of medicine is a valuable service. Having said that, it REALLY irks me when people use thought-terminating clichés about AI like "it's just a tool", as if every tool is inherently neutral until the moment it is used.
I want to address a couple of key points, the first being the ease of misuse. There is a difference between a tool like a spoon, which you really have to go out of your way to harm yourself or others with, and a knife. This is not to say that just because a tool can easily cause harm we need to impose extreme restrictions on it, but it does mean that it cannot be treated the same way as a tool that cannot easily cause harm. In the case of AI, it encourages some of the worst instincts of the human mind. Contrary to the ease with which you can generally avoid stabbing yourself or others with a knife so long as you exercise reasonable caution, identifying, much less combating, these psychological pitfalls is significantly harder.
Second is what I will refer to as "build quality". Did the creators of the tool take reasonable steps to ensure their tool is as safe as possible when used for its intended purpose in its intended fashion? If a blacksmith is using a brand-new hammer, and the head flies off and hits a customer, the blame falls on the manufacturer. Negligence in the manufacturing of a tool also influences how the tool itself should be viewed, as consumers must be made aware that tools from certain sources cannot be used safely even when handled properly. AI companies have consistently resisted taking responsibility for harm done by their tools, despite frequently having been warned about, or otherwise being aware of, concerning behavior by their products.
Lastly, the accessibility of the tool is a factor. If a tool has inherent risks even when used properly, certain precautions need to be taken to ensure that those who cannot use it safely do not have access to it. If an infant injures itself with a knife, the parents are the responsible party for not making it inaccessible. I don't think many words are necessary to explain why flooding the world with a novel and highly controversial technology, such that even those who want nothing to do with it have difficulty fully avoiding it, is not a wise or ethical thing to do.
These are all very, VERY simple points that should be considered any time AI is brought up, but statements that disingenuously present it as perfectly neutral outside of how it is used by individuals are intentionally used to prevent these conversations. This only benefits those attempting to launder AI as a solution to imagined or poorly conceived problems, while spreading confusion among a wider audience who generally don't have a grasp on what AI really is, much less the harms it can cause.
Source: youtube · AI Harm Incident · 2025-12-19T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz-lqQezSt27jJzmH54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwdCUxC1bVIF9ifwhR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMgnxLPfP2dw5QTUd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTlqMp2w4-tphQ-1t4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy2PMa9cOJEbcLvwUR4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzO0tD5wObEKn73hwp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyyJ8RkP9TiWm8wt7l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzNWJ3A41GI70h8S9R4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwntIJQBkdpnyJwhNF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXQZ0ysYXO5Z7S0AB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
```
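A batch response in this shape can be parsed into per-comment codes with a short script. The sketch below is a minimal illustration, not the tool's actual pipeline; the allowed value sets are assumptions inferred from the values visible above, not a full codebook.

```python
import json

# Coding dimensions with example allowed values. These sets are inferred
# from the response shown above and are an assumption, not the real codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def parse_coding(raw: str) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: codes}, rejecting unknown values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}  # KeyError if a dimension is missing
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# One record from the response above, for demonstration.
raw = ('[{"id":"ytc_Ugy2PMa9cOJEbcLvwUR4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"unclear","emotion":"outrage"}]')
print(parse_coding(raw)["ytc_Ugy2PMa9cOJEbcLvwUR4AaABAg"]["emotion"])  # outrage
```

Keying the result by comment ID is what lets a lookup view join each comment to its coded dimensions, as the table above does for `ytc_Ugy2PMa9cOJEbcLvwUR4AaABAg`.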