Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "except nsfw ai stuff is really, really popular. even financially. so that draw m…" (ytr_UgyLoukS9…)
- "But creating something new within it will take a lot of work. AI still by itself…" (ytr_UgzNKDmL4…)
- "The key to Ai safety is to begin training with the principle of non harm. Sanct…" (ytc_UgwgFaBZD…)
- "With the advancement of AI, i just seem to be ultra sceptical about a lot of stu…" (ytc_UgwARosUh…)
- "Saying social media addiction is just about personal responsibility doesn’t real…" (ytr_Ugy1sUWWx…)
- "This doesn't make any sense the real page of the real product requires a situat…" (rdc_mocgbah)
- "There is no way we would ever have a successful global effort to do something ab…" (ytc_Ugx_3NnE7…)
- "They'll just push more AI to make up for the loss in sales. Staff is the first t…" (ytr_UgwDmrXJx…)
Comment
In an ideal world, training AI on medical textbooks and peer reviewed information for doctors to use as a database would be really cool. It would help find rare diseases and expand doctors knowledge, because the human brain can only learn and hold so much. I personally would prefer a doctor saying "i dont know, let me research it and get back to you" over "you're just making it up". And this is coming from a disabled person who has a veritable life time of medical trauma and gaslighting.
Unfortunately, i would not trust it in this current age because of the lack of regulations for the USA. And i fear it would just be further used to discriminate against poc and encourage fatphobia even more. But maybe one day 🫠🦓🥄🧂🧡
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Timestamp | 2024-06-04T02:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzup509SHCZjgRBVlx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy8GA8YXdecqOyodj54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxrJQfIwOICymXTy3Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwufplojDt_ua3tzwJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxopBKnO8LHppMbCkl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzDsfNc_6MZYn9SC_t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzjLnv86eOM0mCysAN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyGPKu1ZaWWVwykYCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwOw9KSlocuh-cz9CV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwT9Ja0B6ZaPEOPyLV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
```
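A raw response like the one above can be parsed and checked against the four coding dimensions before the codings are stored. The sketch below is a minimal illustration, not the tool's actual validation code; the allowed value sets are inferred from the sample output on this page, and the real codebook may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define more categories).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with an id and
    a recognized value on every coding dimension."""
    valid = []
    for row in json.loads(raw):
        if "id" not in row:
            continue  # a coding without a comment ID cannot be stored
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Example: one well-formed row, one with an unknown responsibility value.
raw = (
    '[{"id":"ytc_A","responsibility":"distributed","reasoning":"mixed",'
    '"policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_B","responsibility":"society","reasoning":"mixed",'
    '"policy":"regulate","emotion":"approval"}]'
)
print([row["id"] for row in validate_codings(raw)])
```

Rows that fail validation can then be queued for re-coding rather than silently dropped, which keeps the coded dataset aligned with the codebook.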