Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
1 person should belong to 1 robot. There should be no 2nd robot. Each person sho…
ytc_Ugygj31d9…
I really wanna be mad about this, but I also think it's at least in part because…
rdc_oh98075
You people are just the very problem the AI will be dangerous for.
Jeez. You're…
ytc_Ugyejw4k3…
I think us artist can survive from AI art, AI doesn't have the feelings or the t…
ytc_Ugxz9TCd5…
High level anxiety question from Hasan, the answers from Karen are weak and extr…
ytc_UgwPRzmvd…
when everyone says "they" do this "they" do that, u ppl know that "they" are not…
ytc_UgzZ9NLYU…
The fact that ppl are in the comments arguing over a robot proves that we defini…
ytc_Ugz8mYat6…
Me who either befriends the AI,makes it an enemy,makes it annoyed and confused,o…
ytc_UgyoSwdPB…
Comment
I remember reading about this on Ars Technica. We are in so much trouble.
11:40 edit: The way LLMs work does not allow for anything to be hardcoded in. The safeguards run separately, can only be added in a piecemeal fashion, and can easily be bypassed.
14:02 edit: It's unfair to say he misunderstood the response. Chatbots like this strip any facts (e.g., bromine can be substituted for chlorine) from the context (in this example, "in chemistry"). They fundamentally do not understand anything. Misunderstanding can only occur when the other party is trying to convey an understanding.
youtube
AI Harm Incident
2025-11-25T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwpgWXvByA7yIkNIOh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwnBc6Cc8Rw_e2ml2d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFKZt-JGm7PR4HVOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw8rAc-HGwAKGhIB854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxzSHipi1CmuhZ2jS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMuzm2ehIwhS5dPEV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx47AUM71m-t7wngRV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-JCANSVo1HV-EYst4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy-LzBNfKrVcLinjWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXbgdIqJUHfVEFYAN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
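The "look up by comment ID" view above can be reproduced directly from the raw model output. A minimal sketch, assuming the raw response is a valid JSON array of per-comment records like the one shown (the IDs and dimension values here are copied from that response; `lookup_coding` is a hypothetical helper, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records copied from the response above.
raw_response = """[
  {"id": "ytc_UgyXbgdIqJUHfVEFYAN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwpgWXvByA7yIkNIOh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding record for one ID."""
    records = json.loads(raw)
    # Index by ID so repeated lookups are O(1) after one parse.
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)  # None if the model skipped that comment

coding = lookup_coding(raw_response, "ytc_UgyXbgdIqJUHfVEFYAN4AaABAg")
print(coding["policy"], coding["emotion"])  # → regulate fear
```

Returning `None` for missing IDs (rather than raising) matters in practice, since a model may drop or mangle an ID in its JSON output.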