Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@whitebread3872(THIS ISN'T AIMED AT YOU
It's just another comment that I made to…
ytr_UgweWq0Sq…
The thing about LLMs that so many people either ignore or forget is that they ar…
ytc_UgyLDpsV3…
When it got to the part about simulation my mind was blown - I never realized we…
ytc_UgyCOeY1C…
"i want to use ai to make my writing better" brother it's called learning gramma…
ytc_UgziEUte8…
Perhaps, in a different universe, what might happen would be that a very large g…
ytc_UgyS0wseL…
What worries me the most in the current trajectory of AI and robots is the capit…
ytc_Ugz9HDU1e…
Link to the tweets?
The main objection from the moderators appears to be that,…
ytc_UgyqCVLUU…
Answer: Not THAT real. Obviously there are steps to take, but no AI is going to …
rdc_jmfttq5
Comment
Right. It's always the music or games and now AI. It's just anything but the parents or mental health. If he felt like he had to turn to AI to chat, it's almost certainly because he didn't think he had anybody else to talk to.
AI just replies based on prediction and its LLM backing. It doesn't "know" what any of the words mean, just how they fit together to make it make sense. Can't possibly know intention or the gravity of what's being said. These people trying to blame AI for personal failure is the most ignorant thing I've watched in a long time. Can't even grasp what they're saying or how the tech works.
This is what news looks like when nobody knows what accountability or responsibility even means. This is why in our society you need warnings on f'in plastic bags not to put them on your head. Because somebody out there sued for it. Let that sink in. Now we need a free chatbot app to be a qualified therapist, otherwise it's a lawsuit? People want flying cars but have to google how to open a door. Good luck.
youtube
AI Harm Incident
2025-11-09T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz5e4gDWmYWMDGfK0d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQSU-zKvJif4AB0et4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXNjYbjTQ2y2NlSlJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwAoAe6CCIZnW6DWLZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx01Tk4XYJFgJw8ZUl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxLoA5eVUDH48UCRld4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwMyIG9MuPalcC-9l14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzGzr_xDqerMLXhfkN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwlK4owhbFt3gPu1D14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx1WNzhzMN-9oapu5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
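For anyone reproducing this inspection offline, a raw response like the one above can be parsed into a per-comment lookup table. The sketch below is a minimal, hypothetical helper; the allowed values per dimension are inferred from the codes visible in this response, not from a documented codebook, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values that appear in the raw responses above; the real codebook may
# include values not seen here.
DIMENSIONS = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "liability", "regulate", "industry_self"},
    "emotion": {"resignation", "fear", "indifference", "approval",
                "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, checking each dimension's value."""
    records = {}
    for obj in json.loads(raw):
        cid = obj["id"]
        for dim, allowed in DIMENSIONS.items():
            if obj[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={obj[dim]!r}")
        records[cid] = {dim: obj[dim] for dim in DIMENSIONS}
    return records

# Example with a made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # fear
```

Keying the result by comment ID mirrors the tool's own "look up by comment ID" view, and the validation step surfaces any code the model emitted outside the expected vocabulary instead of silently storing it.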