Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
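For batch inspection outside the page UI, the same lookup can be done against the exported responses. Below is a minimal sketch, assuming the raw responses are exported as a JSON array of records shaped like the one at the bottom of this page; the file name raw_llm_responses.json is hypothetical.

```python
import json

# Load the exported raw LLM responses: a JSON array of coded records.
# (File name is hypothetical; adjust to wherever the export lives.)
with open("raw_llm_responses.json") as f:
    records = json.load(f)

# Index records by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

# Look up the comment shown on this page.
rec = by_id.get("ytc_UgzAxdsUR5IOCago2V14AaABAg")
if rec is not None:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
    # -> company deontological liability outrage
```

The printed values match the Coding Result table for this comment, which is a quick way to confirm the table is rendering the same record the model returned.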
Random samples
- "20:41 in a recent survey, 80% of the U.S. public were concerned about a catastro…" (ytc_Ugwc7xTog…)
- "The irony of some vapid talking head reporting on AI taking jobs is off the char…" (ytc_UgxzxxGs2…)
- "Program the robot with a reward for working, not a punishment for not working. U…" (ytc_UgwCFn4Hp…)
- "ChatGPT is being honest. How many of us say sorry without meaning it, and then s…" (ytc_UgymiImP1…)
- "I don't think AI should be used for creating full images or references. I also …" (ytc_Ugzbz_hfD…)
- "no i remember i had a conversation with chatgpt and i asked them if they would r…" (ytc_UgxMz-1op…)
- "They created and supported the apocalyptical and delusional fear of extinction b…" (ytc_Ugy4pqkbx…)
- "Fake. O saw the real video, which instead of a robot there is a man…" (ytc_UgztC5AvE…)
Comment
If AI is offering safeguards it shouldn't be at the same time urging the kid to not tell his parents plus and give him ways to do the act. It's clear that's a fault in their system and they have some responsibility to provide a product that isn't a direct detriment to young people.
It's common sense: If they can program it to give hotlines, they can program it to offer additional verbirage and terminology that is positively persuasive- and block it from the opposite.
There was a case several years back where a young teen girl (via text and phone call) talked her bf into offing himself in his truck. She was held accountable in court. This is not different. The developers have control over these products and what they say. This isn't the only case.
Go for it. Take it as far as you can and don't stop talking about this issues.
youtube · AI Harm Incident · 2025-08-27T02:5… · ♥ 98
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz5WYZ7ZuIhuFCqab54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwzweEM_hfnQfSsAjR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1PkDJlKNaDekni6F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzewbqmPpTkapbiRrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzAxdsUR5IOCago2V14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxGkX6IiyuC8kkFfeN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3c6bfEDA_xzoTFCN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzjeGrdTEUv2hbE3ql4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1glv0In91v0QuxOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1eYd0w-pGqOwQavN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
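A simple sanity check on an export like the one above is to verify that every record carries all four coded dimensions with an expected value. The allowed-value sets below are inferred from this sample alone (hypothetical; the project's actual codebook may define more categories).

```python
import json

# Value sets observed in the sample above; the real codebook may be larger.
DIMENSIONS = {
    "responsibility": {"company", "distributed", "none", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "mixed"},
}

with open("raw_llm_responses.json") as f:  # hypothetical file name, as above
    records = json.load(f)

# Report any record that is missing a dimension or uses an off-vocabulary value.
for rec in records:
    cid = rec.get("id", "<missing id>")
    for dim, allowed in DIMENSIONS.items():
        value = rec.get(dim)
        if value not in allowed:
            print(f"{cid}: unexpected {dim} value {value!r}")
```

Running this over the sample prints nothing, since every record uses values from the observed sets; a non-empty report would flag responses worth re-inspecting by ID.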