Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
- "Japan here, and in the same boat as everyone else. AI gets about 80% of the nuan…" (`rdc_ktat96z`)
- ".... faceMask and helmet?? why not just instaMakeup yourself to look like a fr…" (`ytc_UgyWt54KO…`)
- "Exactly. Companies could well invest into AI without firing people, but sharehol…" (`ytr_UgxQNtI0r…`)
- "Why would this stupid AI give out information on how to commit suicide? Why eve…" (`ytc_UgzrBq72e…`)
- "Feel like people forget, in order for AI to get better and better, it needs to b…" (`ytc_UgzyDVGm2…`)
- "Considering: -Evolutionary Intelligence Natural- ... Artificial- Alienating- a…" (`ytc_Ugy6Qbnop…`)
- "To not even ALLOW state or local governments to create ANY LAW regarding AI is f…" (`ytc_Ugzx_3uUS…`)
- "This is the future. Everybody is going to be AI augmented. Many people will even…" (`rdc_moc7xuv`)
Comment
But there are still a lot of ethical concerns I have about this study’s significance. Why wouldn’t hospital execs see this as a reason to incorporate/undermine pharmacists or doctors by claiming their job is not much more advanced than a relatively cheap ai? Furthermore, who is held liable if the ai model suggests something along the lines of malpractice or negligence or continuing racial biases against POC? Silicon valley is full of good faith but racist products like when HP software couldnt detect black people. 🤨
youtube · AI Harm Incident · 2024-06-03T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugye_JLMqlXmqeR61bB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2BI9Cy0wlWUKOz5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKlcKYgf_2d5Tdc5F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgweOImXJr_z3SVZr8B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5GBQ6zjbBtY8VZnB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqjsE93ZKwzJOB5LV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyY3PWsRb7T43cVzox4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzWQR2_56GXxlVACMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz0XbwgVUv6wKvVYhZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz-qwhBvJ660D--uXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
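The lookup-by-ID step above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment records, as shown) and build a map from comment ID to its coded dimensions. This is a minimal illustration, not the tool's actual implementation; the function name and sample records are assumptions based only on the payload format shown here.

```python
import json

# Two records copied from the raw response above, truncated to keep the
# example short; any JSON array in this shape would work the same way.
raw_response = """
[
  {"id": "ytc_Ugye_JLMqlXmqeR61bB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyY3PWsRb7T43cVzox4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and map each comment ID to its coded record."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

coded = index_by_id(raw_response)
record = coded["ytc_UgyY3PWsRb7T43cVzox4AaABAg"]
print(record["responsibility"], record["policy"])  # distributed liability
```

A lookup like this is what lets the UI render the "Coding Result" table for a single comment out of a batched response.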