Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "there must be some way out of here.... said the jumper to the theif..... its hap…" (ytc_Ugz9CtP_G…)
- "Yeah this guy is definitely on the side of the devil sorry all you elon muskrat …" (ytc_UgwTjzE3b…)
- "When it's completed, call me, I'll buy it, it's perfect for a house 👉🏻 🌌🏠🚶🏻♂️..…" (ytc_UgxZ42hXx…)
- "Is NYT public info? Pretty sure you pay to read their articles and universities …" (ytc_UgyXF65PZ…)
- "What's increasingly clear is that America's cyber security defense is no match f…" (rdc_ld5mkg7)
- "19:42 Nonsense. When an AI drone w built-in lethality is deployed, the deploying…" (ytc_UgzMQqqAW…)
- "Sad days that we needed to see his face, just so folks could know they weren't l…" (ytc_UgwFqsnRV…)
- "I want to know your take on the Anthropic vs authors decision. Namely, the decis…" (ytc_UgypJXf5N…)
Comment
One thing AI needs to overcome are the patient's themselves. First, patients give really terrible answers about their own symptoms. Second, AI has thus far been programmed for confirmation bias- if a patient is giving feedback and looked it up and there is some type of a doc bot, it is going to have to go against it's current design of placating the consumer to provide care. Just something to think about.
I also think that even if and/or when AI gets to the point of doc bots, what happens with medical liability? Can't sue a machine, so you sue the company that manufactured or made it's programming... they will have physicians oversee that shit and make it so the liability is still on them, much like insurance companies do today.
youtube · AI Harm Incident · 2025-10-25T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugy6USL2pNJUqwcfEE14AaABAg.9qrDZdl91AY9qrenF3Jxkt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzrtB7_KxezCDdfoMJ4AaABAg.9qr4PMvur4C9sbpC7bzAci","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyCfvGv1Qx7JgFKodR4AaABAg.9qr1oEjf7ca9quxS5tMO0e","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwHsIwuLSLs33rGGgl4AaABAg.9qqzlo9l1cY9qsQTANcDsK","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzTNWSquVApgdE-7lZ4AaABAg.9qqSALnE9MZ9qtM80Hk5x3","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytr_UgzTNWSquVApgdE-7lZ4AaABAg.9qqSALnE9MZ9qtPFuVTwsr","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugy2RUN4f8FFhtIA2A94AaABAg.9qqRwZ2cIwP9qvb2T6Uv37","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgykiRbezzk9MSRJIEt4AaABAg.9qqFBLeCn7b9qwvMRCG5PN","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyLZNOXIklwW0GYFa14AaABAg.AL3Chi9mpzzAOhfG_qEp_O","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyLZNOXIklwW0GYFa14AaABAg.AL3Chi9mpzzATIFw2q91w1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
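The raw response above is a JSON array of per-comment objects keyed by `id`, and the lookup-by-comment-ID feature presumably selects one such object to populate the Coding Result table. A minimal sketch of that lookup, assuming only the array-of-objects shape shown above (the function name and the sample ID are hypothetical, not part of the tool):

```python
import json

# Abbreviated stand-in for the raw LLM response shown above:
# a JSON array of coding objects, one per comment, keyed by "id".
raw_response = """[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding object whose "id" matches comment_id, or None."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    return None

row = lookup_coding(raw_response, "ytr_example2")
print(row["responsibility"], row["emotion"])
```

A real implementation would also need to handle malformed model output (e.g. a truncated array or missing keys), which `json.loads` surfaces as a `JSONDecodeError`.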