Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytr_UgwxJi1fi…` — "well in my case the explanation is simple: i am quite heavily contributing to th…"
- `ytc_UgxvhX7z3…` — "All of this glorious AI fantasy is missing out on one very important detail. As …"
- `ytc_UgxRH99N6…` — "I talk to chatgpt all day everyday since it came out, the newer versions are abs…"
- `ytc_Ugz_IiaLM…` — "These artificially generated news stories about A.I. are getting sick. I almost …"
- `ytc_UgyKaeJSu…` — "I agree with most points you bring up, and agree that writing a prompt isn't mak…"
- `ytc_UgzFokBae…` — "God, Trump is literally the best thing that happened for AI. I love Donald Trump…"
- `ytc_UgwSVlnLr…` — "Humans were never designed to be labor slaves. A job is result of serfdom. AI …"
- `ytc_Ugz7xCZMA…` — "ethics. compassion. empathy. good and evil. selfishness versus selflessness. sac…"
Comment
> Asking an AI to do tasks that require critical thinking is a big mistake. AIs can be good at catogorizing products you are interested or organizing search results. But anything beyond that is asking far too much from AIs.
>
> Because not only do we have examples of AIs killing people by giving them bad medical advice, but we also have examples such as AIs making up legal rulings when used by Lawyers to find legal precedent to backup their cases, which caused the lawyers in question to be disbarred.
Source: youtube · Category: AI Harm Incident · Posted: 2025-12-15T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[{"id":"ytc_UgyJMtZTrLQjQHslsIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxb2J_MtIwbBTomBnN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8iqQuniR6PB8gSvN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzy9dAJIkNKoKPaqq54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVFI972ZrwgVGf3Wx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyXsz-lMN9BIha3bV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjIHYU5bF6Q6ao0dx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwW5BywVeSIjHa8qrx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzwwI5zw9-poPd6GA94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzwR30cf-HdhYrpW714AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
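A response like the one above can be sanity-checked before it is stored. The sketch below parses a raw LLM response and validates each record against the four coding dimensions shown in the result table. The allowed value sets are inferred only from the values visible on this page; the actual codebook may define more categories, and `validate_codings` is a hypothetical helper name, not part of any existing tool.

```python
import json

# Allowed values per coding dimension, inferred from the values seen in
# this dashboard (assumption: the real codebook may include more).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "indifference"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA.

    Raises ValueError on a malformed record; otherwise returns the
    parsed list of coding dicts.
    """
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records


# Example: the record that matches the Coding Result table above.
raw = ('[{"id":"ytc_UgzwwI5zw9-poPd6GA94AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
codings = validate_codings(raw)
print(codings[0]["emotion"])  # fear
```

Failing loudly here is deliberate: a record with an out-of-schema value usually means the model drifted from the prompt's label set and the batch should be re-coded rather than silently stored.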