Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Honestly, one time i spent an eternity with waiting to cross the street walking …" (ytc_UgxgEL_kn…)
- "To me AI is just another software program that runs on hardware! It is just on a…" (ytc_Ugz2lvHOZ…)
- "This shit is as hilarious as it is stupid, meta-data exists for nearly every pie…" (ytc_Ugzn9I5sT…)
- "You spent way too much time defending AI. LLM-based chat bots should never be re…" (ytc_UgwY7-7Dy…)
- "This guy loaded the Ai with suggestive key words. This was not the first conver…" (ytc_Ugx03vjjj…)
- "I play video games all day but yesterday I drew an image of my favorite characte…" (ytc_Ugy8uY3Uo…)
- "Are we just gonna ignore the fact that the thumbnail said the robot was hot?…" (ytc_Ugy7_gTWa…)
- "Perfect behavior. Every school would be like this if they could control who was …" (ytc_UgywPeT0E…)
Comment
I agree that it is a human problem. Even without AI this could have happened by the guy reading a chemistry book, realizing the two chemicals are similar & coming to the same "conclusion". People were doing all sorts of stupid stuff like this before AI was even a gleam in it's programmers' eyes. Why do you think the world is full of disclaimers on everything? Cuz some fool did some thing considered "common sense" not to do & then sued over his own stupid actions, or in some cases the fool's family sued, cuz the fool died from his stupid actions. Not long ago, I picked up a bag of sliced pepperoni in the store & on the bag was an explicit warning to not eat the plastic bag. I think the old saying is true "Try to make the world foolproof, you'll just end up with bigger fools". Companies adding warnings to AI responses probably won't keep people from doing stupid things to themselves based on "advice" from AI, it's just a legal "Cover Your Ass" for the AI companies, but at least they seem to have learned a small lesson about how their AI can be bent/abused by the foolish.
Source: youtube | AI Harm Incident | 2025-12-30T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgySlL8roSxtlzPIYi14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwk6wkmzZEX9mPZU7B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyuLDuv2-Hnd3VNZRh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGXZhWZheukNyZAE14AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyXKPKWS4mvMM7Jxt54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxTKXSrNkMYiM9-qUp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxbMVil7EYaxL7HoS54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzCgv_dnrk1akStjbR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyjllpAF_SKvephINN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgymXVxRYrNOEa1mciR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
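A raw batch response like the one above can be parsed and indexed by comment ID to drive the lookup shown in this panel. The sketch below is a minimal illustration, not the tool's actual code: the four dimension names come from the coding-result table, while the allowed value sets are only those observed in the responses above and may not cover the full codebook.

```python
import json

# Allowed values per dimension — assumption: inferred from the raw
# responses shown above, not the full codebook.
DIMENSIONS = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"resignation", "fear", "outrage", "approval",
                "indifference", "mixed", "unclear"},
}

def index_coded_rows(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded rows),
    validate each dimension value, and index the rows by comment ID."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
coded = index_coded_rows(raw)
print(coded["ytc_example"]["emotion"])  # resignation
```

Validating against the dimension sets at parse time catches a model that drifts off the codebook (a misspelled label, an invented category) before the row reaches the database.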