Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
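As a rough sketch of what an ID lookup might do under the hood (the file name `coded_comments.jsonl` and the helper `lookup_coded_comment` are assumptions for illustration, not the tool's actual storage):

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Scan a JSON-lines store of coded comments for a matching ID.

    Each line is assumed to hold one record shaped like the raw LLM
    response entries shown below, e.g.
    {"id": "rdc_jktck42", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None  # ID not found in the store

print(lookup_coded_comment("rdc_jktck42"))
```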
Random samples

- `ytc_UgwlJvbQ0…`: "100 years? Only about 5 years ago AI was barely able to form a readable sentence…"
- `ytc_UgxkQ3V2W…`: "Ok for all you glass half empty pesimists out there listen up. This is what is g…"
- `ytc_UgxRjgD3g…`: "The AI bros seem to think that lacking the effort/talent to make art yourself is…"
- `rdc_dy8cpcr`: "Maybe if they appoint new Tamil Kings they'll be able to kick out the undesirabl…"
- `ytc_UgxLHSjNq…`: "i think also something that no-one really brings up is: why would the AI bother …"
- `ytc_UgzVzkLa4…`: "This isn't ai, she is pre programmed. This is insanely low IQ. It's a Chuck E Ch…"
- `ytr_UgzMnPX-9…`: "@stephanos6128 Maybe. I'm not sure. I did because I had no choice. But I think …"
- `ytc_UgzdnOH-6…`: "@ 51:44, jesus christ dude, she tells you democracy has about 20 years, and you …"
Comment
>My husband presented my initial symptoms of a rare disease (Anti Synthetase Syndrome) to Chat GPT in February. It took four questions (with him inputting test results from tests suggested by Chat GPT). It took 4 questions. In reality it took 6 months with the doctors being convinced the whole time that I had pneumonia (resulting in 6 rounds of unnecessary antibiotics). Finally a random test result came back positive. By then I was on 7 litres of oxygen.
>
>I'm off oxygen now because my husband spent the night after my diagnosis reading all of the medical journal articles on ASS that he could find and came in the next morning suggesting two medications. The doctors wanted to go through their standard meds for autoimmune diseases and three months later (when I wasn't expected to survive longer than another two months) they gave in. Six months later I was off oxygen.
>
>I was in the hospital in February and the doctors ignored my disease because "they hadn't heard of it." It was a dumpster fire of a hospital stay and I was discharged and am now terrified to ever be admitted again. I spent a lot of energy advocating for myself because they insisted that I just had pneumonia.
>
>Honestly, whenever I have questions now I ask Chat GPT 4 (I think of him as Gary) because I know it holds no unconscious bias and won't just default to things it normally sees every day.
>
>I can definitely see a future where doctors just need to review diagnoses given by AI. As long as there is a human reviewing things with an eye toward benefit vs. risk, I'm good with it.
I cited your reddit and posted a clinical case on Medium, similar to yours, to see if human clinicians could come up with the diagnosis. GPT-4 could definitely make the diagnosis, but not GPT-3.5. I am using this case to test other chatbots to see if they can solve it, **Case**: A 52-year-old woman presented to the outpatient clinic due to progressive muscle weakness, arthralgi…
reddit · AI Responsibility · posted 2023-06-05 (Unix timestamp 1686008549) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
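The four dimensions in this table map naturally onto a typed record. A minimal sketch, with the caveat that the value sets below list only codes observed on this page, not necessarily the full codebook:

```python
from dataclasses import dataclass

# Codes observed in this sample; the real codebook may define more
# (e.g. additional emotions or responsibility targets).
RESPONSIBILITY = {"none"}
REASONING = {"consequentialist", "mixed"}
POLICY = {"none"}
EMOTION = {"approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> "CodingResult":
        # Reject any code outside the known value sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code {value!r}")
        return self
```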
Raw LLM Response
```json
[
  {"id": "rdc_jktck42", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jn1yucs", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jkoo1qd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jkpwk6y", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jl4yz17", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
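Since the model codes comments in batches, the raw response is a JSON array with one object per requested comment ID. A minimal sketch of how such a response might be checked before use; `parse_batch` and its validation rules are illustrative, not the pipeline's actual code:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str, expected_ids: set) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    by_id = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing keys: {missing}")
        by_id[rec["id"]] = rec
    if set(by_id) != expected_ids:
        raise ValueError("model returned a different set of IDs than requested")
    return by_id
```

Comparing the returned ID set against the requested one catches a common failure mode of batch coding, where the model silently drops or duplicates a record.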