Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or pick from the random samples below:

- `rdc_jegym8y` — "I wrote a blog post a week ago or so and fed it to ChatGPT for proofreading/crit…"
- `ytc_UgzMYfkpe…` — "Microsoft discovering AI was a mistake too. Anybody playing with requests on Goo…"
- `ytc_UgwAWKbzF…` — "This is animated, and shoot over a green screen. Don't let AI test your intellig…"
- `ytc_Ugz-UgUn5…` — "I wouldn’t be too afraid of AI compared to what the government could do to us wi…"
- `ytc_UgzTOJvwb…` — "I hope they are both super successful. I can't wait for the day that all cars ar…"
- `rdc_jvpsa8q` — "This is kind of creepy. It reminds me of the episode of Stargate SG1 where an …"
- `ytr_UgzoaI5hH…` — "The concerns about AI dangers are valid, and it's important to discuss them. How…"
- `ytc_Ugxq8ak7U…` — "Just as they have AI rules to play chess and have it learn, it's been given eve…"
Comment
My husband presented my initial symptoms of a rare disease (Anti-Synthetase Syndrome) to ChatGPT in February. It took four questions (with him inputting test results from tests suggested by ChatGPT). In reality it took six months, with the doctors convinced the whole time that I had pneumonia (resulting in six rounds of unnecessary antibiotics). Finally a random test result came back positive. By then I was on 7 litres of oxygen.
I'm off oxygen now because my husband spent the night after my diagnosis reading all of the medical journal articles on ASS that he could find and came in the next morning suggesting two medications. The doctors wanted to go through their standard meds for autoimmune diseases and three months later (when I wasn't expected to survive longer than another two months) they gave in. Six months later I was off oxygen.
I was in the hospital in February and the doctors ignored my disease because "they hadn't heard of it." It was a dumpster fire of a hospital stay and I was discharged and am now terrified to ever be admitted again. I spent a lot of energy advocating for myself because they insisted that I just had pneumonia.
Honestly, whenever I have questions now I ask Chat GPT 4 (I think of him as Gary) because I know it holds no unconscious bias and won't just default to things it normally sees every day.
I can definitely see a future where doctors just need to review diagnoses given by AI. As long as there is a human reviewing things with an eye toward benefit vs. risk, I'm good with it.
Source: reddit · AI Responsibility · 1684525736.0 · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jktck42","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_jn1yucs","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_jkoo1qd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jkpwk6y","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_jl4yz17","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
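The raw response above is a JSON array of per-comment codes, one object per comment, keyed by the same dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a response in Python (assuming exactly this schema; the field names are taken from the response above, everything else is illustrative):

```python
import json

# Fields every coded record is expected to carry, per the response schema above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# A shortened copy of the raw LLM response, used here as sample input.
raw = '''[
  {"id":"rdc_jktck42","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_jn1yucs","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

def parse_codes(text):
    """Parse a raw coding response; raise if any record is missing a field."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
    # Index by comment ID for O(1) lookup, matching the dashboard's lookup-by-ID.
    return {rec["id"]: rec for rec in records}

codes = parse_codes(raw)
print(codes["rdc_jktck42"]["emotion"])  # prints: approval
```

Validating before indexing means a truncated or malformed model response fails loudly instead of silently producing partially coded comments.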