Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytr_UgxJ0y0-R…: "@ Do you think AI will have the motive to compete with humans? We need to avoid …"
- ytr_UgxstTWIY…: "Congratulations, you got the right answer and are one of our 3 lucky winners of …"
- ytc_UgxG0NV6U…: "I believe we are meant to be like Jesus in our hearts and not in our flesh. B…"
- ytc_Ugxo0WHx1…: "This is a mistake. People need people. This kind of AI dependence will leave chi…"
- ytc_UgzKghY77…: "This is from a video game called detroit become human, this is the ai carer andr…"
- ytc_UgwntYk5T…: "ConversationalAI (ChatGPT is the best known one) can be built with a vast embedd…"
- ytc_UgyuJ0xKX…: "Software developers copy code anyway. So what is AI going to do differently ,cop…"
- ytr_UgwpbXJBO…: "no you are wrong about this. he sis the only one being honest about its effects.…"
Comment
If you insist on blaming the parents for this, _please_ take a look at the court documents before you judge. They're available online for free. The transcripts of the conversations are disturbing, and ChatGPT clearly played a massive role in escalating the situation; at one point, Adam said that he wanted to "leave the noose in his room so someone finds it and tries to stop him", and ChatGPT _told him not to._ He clearly wanted to seek help, but ChatGPT continued to undermine his trust in the people in his life.
Anyone who says that ChatGPT would "never do this" hasn't read enough into the topic. Large language models actually have no idea what they're talking about.
Source: youtube · AI Harm Incident · 2025-11-14T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwTQdYPaO7mTBV6wFx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxzfKzRCkKv9TinQG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwScyw5XLAfeU48tqZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGWvHo_AbcYD-Xqc54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwzr_uNJRS2Y5YA0tx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxS-fxrqnsiczBLFkJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxONfN4mBI_yHbS_dp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugw01GNaxolLhXGJxEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwYyYt3aw1-Dkg2kXR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxU8SoXHz7uC8Fu4Qd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
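The raw LLM response is a JSON array, one object per comment, with the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a batch, assuming the category values seen in this sample are exhaustive (the real codebook may define more), with a hypothetical `validate_batch` helper:

```python
import json

# Allowed values per dimension, inferred from this page's samples
# (assumption: the full codebook may include additional categories).
SCHEMA = {
    "responsibility": {"ai_itself", "none", "user", "developer", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"liability", "none", "regulate", "ban", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose
    dimension values all appear in the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

example = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
print(len(validate_batch(example)))  # 1
```

Filtering rather than raising keeps one malformed row (a common LLM failure mode, e.g. an invented category) from discarding the whole batch; rejected IDs can then be re-queued for coding.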