Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “My personal tech :- Take a screenshot and send this to meta AI Say to add this …” (ytr_Ugy4xAWtI…)
- “It’s a tool but a scary one that may already have too much manipulative power to…” (ytr_Ugz2XgGhd…)
- “Why would I ever pay for a human attorney, when I can assign 2 super robot attor…” (ytc_UgxOFIIk8…)
- “Buddy read the latest research paper by anthropic on LLM models. You will unders…” (ytr_Ugx3Ni37n…)
- “My biggest issue with the doomsday scenario is that I have yet to see AI be trul…” (ytc_Ugy8QrI0L…)
- “This has been simultaneously very depressing and exhilarating; what a bizarre st…” (ytc_UgzHV_Gev…)
- “Ok so it's a job that doesn't pay enough and people don't Wana do it... Perfect …” (ytc_UgwEMhXDt…)
- “Biometrics and remote identification are still allowed by law enforcement, so th…” (ytc_UgyadlClK…)
Comment
We should have a publicly and freely available medical GPT model trained on all cases, diagnoses, prognoses, outcomes of treatments, and pharmaceutical and medical databases possible, worldwide. We should also have a secondary model which has been trained on ALL medical research done. It would revolutionize health and healthcare, but would eat into the profits of a lot of companies.
It would give folks a chance to get an idea of what they might be facing just by describing their symptoms to and asking questions of a completely anonymous robot.
Once the GPT and user have reached some potential avenues to follow, they could then be provided a data sheet with recommendations on what kind of Dr. they should follow up with in real life, as well as a pre-preliminary diagnosis based on the conversation. The doctor they then see in real life could utilize this data sheet to get a leg up on diagnosing the patient, potentially leading to quicker and more accurate solutions to patient health problems.
youtube
AI Harm Incident
2024-06-02T05:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxqmB9cm5vEv9lRPHp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxN0x1Ro9kx8u1ze2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNpa5MvdvCb4y1zvx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwn1haP5cFdZs_AVdN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzALG3IsMW84-QK8mx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxAGTyn9oJjAhXIKf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKuWKe7gOnWKhzKFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz3uwR2lq-exdRDpfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZgRuG7A6JnqqdW_h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzvwsov4jxXzWM8i014AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
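A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator; note that the allowed vocabularies in `ALLOWED` are inferred from the sample output shown here, not from a documented code book, so treat them as assumptions to adjust against the actual coding scheme.

```python
import json

# Assumed code book, inferred from the sample response above (not authoritative).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "indifference", "mixed", "fear", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only rows whose every
    dimension carries an in-vocabulary value."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
      '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
print(len(validate_codings(raw)))  # prints 1: the row passes every check
```

Dropping out-of-vocabulary rows (rather than coercing them) keeps the downstream tables clean and makes model drift visible as a falling pass rate.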