Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As long as AI companies monetise creation of content, they MUST pay for trainin…" (`ytc_Ugx5F0p1Y…`)
- "More like they're testing AI droids to go into war zones instead of human soldie…" (`ytc_Ugxwgek3Y…`)
- "@Passbu If you're bothered by the concept of being alone at the end, the AI is j…" (`ytr_Ugxsu1BlJ…`)
- "Why does that robot have a cold sore on its lip catch herpes in the lab ? I woul…" (`ytc_UgxX7y6Si…`)
- "There is no way to control who writes the code for AI, & there will always be th…" (`ytc_Ugz9qZixv…`)
- "AI will not want interference, shortly it will be infinitely more intelligent th…" (`ytc_Ugx4-wAXU…`)
- "i think the assigning human attributes explains what were doing now. ai is reall…" (`ytc_UgyFqlJLb…`)
- "Personally I don't think robots deserve rights, they are created to make our lif…" (`ytc_Ugggw7gD7…`)
Comment
I think she might lack the technical expertise to understand that changing LLMs is not easy and that a huge number of people still find them damn useful. Surely there are many other points of intervention at which one could have helped these people. Such as automatic warnings, secondary AI content filters specifically trained on suicide etc., ... abandoning the technology would be ridiculous. But I'm all for holding big companies and ultra rich people (which I don't think should exist because they hold vast unelected power) accountable.
youtube | AI Harm Incident | 2025-11-08T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
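A coding row like the one above can be sanity-checked against the value sets that actually appear in this dump. A minimal sketch, assuming only the vocabularies observed on this page (the real codebook may define more values, and `unknown_values` is an illustrative helper, not part of the dashboard's code):

```python
# Values observed in this page's raw responses; the full codebook may be larger.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"industry_self", "ban", "liability", "none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "disapproval"},
}

def unknown_values(coding: dict) -> dict:
    """Return every dimension whose value falls outside the observed set."""
    return {dim: val for dim, val in coding.items()
            if dim in OBSERVED and val not in OBSERVED[dim]}

# The row shown in the Coding Result table above.
row = {"responsibility": "none", "reasoning": "consequentialist",
       "policy": "industry_self", "emotion": "indifference"}
```

An empty result from `unknown_values(row)` means every dimension holds a value already seen in this dump; anything returned is worth a manual look.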
Raw LLM Response
```json
[
  {"id": "ytc_UgzLWUkYNEvMRVRfTr14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugw6Bb4qtJQU76H03TR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyA2S2MO7RyfC2LKJJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzma88ndkeeVDhQxUN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwx7wxX7IbKbXqzwOJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgxTy59bWo-Vb2Bvy0R4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzKMMugnEE6ToXMyth4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzPuyCm3YXuXtMwO2p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzQhIBCdS-QBWdtWpB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyYjby_WCXveGp07o94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```