Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "One night I was talking to a female chat bot. Even SHE told me she would never h…" (ytc_UgxqwFyas…)
- "AI will be NOTHING after the programmers get done with it. Will be just as brai…" (ytc_Ugy3dr6_G…)
- "I also want to make a robot that looks like a human and can do chores it's an as…" (ytc_UgxV62HcE…)
- "Until a big CME comes and wipes Ai out. We must maintain our knowledge regardles…" (ytc_UgxGgIaaG…)
- "Artificial intelligence: The devil is literally in the details, hence why it is…" (ytc_UgyTTfXEv…)
- "Lawyers will survive. Paralegals will not. But even Lawyers will need to be …" (ytc_UgyfvMMDX…)
- ">be ai / >see a lot of humans be racist / >learn to act like a human / >be racist beca…" (ytc_Ugy3h9NsC…)
- "Am i reading the title of this vid correctly cuz wtf?!? was it necessary to put …" (ytc_UgxbkiHYl…)
Comment
13:35 while it is a human problem, that doesn't mean that the LLM isn't partially to blame, the primary function of an LLM is essentially to agree with the user, because — as previously mentioned — it's a product, and basically just designed to make itself desirable;
By being designed to tell the user they're right without having any capacity to process actual information, the LLM worsens that human problem without the safety measure a human would usually have of actually considering whether or not something is a good idea.
youtube · AI Harm Incident · 2026-04-22T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-sACa30q38aUCiER4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzQVy8xXvsbGgG35HV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwHXFYLZSlUeXxCJLd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCux2GKQxk0BvIrGx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQEUGuAWwaCn8fOFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhvOum004-Hp6wjCF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3Gknio5-FAbynV4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKcZPI7CfR7CFmqCJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxi85BHGv50ld_SYnV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
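The raw response above is a flat JSON array, one object per comment, keyed by the `ytc_…` comment ID with the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed, validated, and indexed for the "look up by comment ID" view is below. The `ALLOWED` value sets are an assumption inferred only from the values visible in this one batch; the real codebook may permit more labels. The two records in `raw_response` are copied from the array above (truncated to two entries for brevity).

```python
import json

# Two records copied from the batch above (the full response has ten).
raw_response = """[
  {"id": "ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwHXFYLZSlUeXxCJLd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Assumed code frame: only the label values observed in this batch.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def index_codings(raw: str) -> dict:
    """Parse a batch response and build an id -> record lookup,
    rejecting any record whose dimension value falls outside the codebook."""
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

codings = index_codings(raw_response)
# Lookup by comment ID, as in the inspector's search box:
print(codings["ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg"]["responsibility"])  # ai_itself
```

Validating against a closed label set at parse time catches the most common batch-coding failure mode: the model inventing an off-codebook label that would silently skew downstream tallies.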