Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.

Random samples (comment preview and comment ID):
- "I am a slow adapter of AI. This video ties to explain how AI works and that huma…" (`ytc_UgxI9sirk…`)
- "Seems more like people are scared of a new thing that isn't quite perfected yet,…" (`ytc_Ugy13_P-c…`)
- "One flaw. No juniors getting experience means fewer senior hiring pool and with …" (`ytc_UgxtQEd-D…`)
- ""AI will probably most likely lead to the end of the world, but in the meantime …" (`ytc_UgzceEIWE…`)
- "Tech is unstoppable. If America does not adopt it for efficiency foreign countri…" (`ytc_UgwX-O_7O…`)
- "Sympathetic to your points, but your role playing and AI personalities only act …" (`ytc_UgyRwAgjx…`)
- "GUYS USE GLAZE TO POISON YOUR ART!!! IT WILL BREAK THE AI IF IT TRIES TO USE YOU…" (`ytc_UgzskzN-u…`)
- "Grab the wheel it will automatically shut down if you try to take control of it…" (`ytc_Ugzq7__Yx…`)
Comment
ChatGPT may not have force fed this guy, and yes people should be accountable for their own actions.
IMHO, the makers of ChatGPT *should* have liability for this kind of thing because it is *their* marketing that continually pushes the chatbot as a useful source of information while *at the same time* trying to disclaim any liability by writing disclaimers here and there.
There is nothing about LLMs that particularly smart. And the other argument they keep making -- that all of this was inevitable and that it's just the price of progress -- that argument is B.S. of the same kind that their chatbot spits out.
Source: youtube · Video: AI Harm Incident · Posted: 2025-11-30T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJ5jHze0-lq1kX3JV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0f00_G4Y6vt2sgzJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugzwk4Mz_qHb6roYerF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxji648QJIJmNf6XpZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyCgapgCAgzEAAmnnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXKRS2sB0pTRUrvLF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_BXAtb_Fhbe0u-mh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyiMq4flmKLQPQPaPZ4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxeiakseFIKEB0Xf6l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwnKUIUxeENOYE66PF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
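A raw batch response like the one above can be checked before its codes are stored. Below is a minimal validation sketch; the dimension names and allowed values are taken from the table and responses on this page (assumed to be the complete value sets), and the `validate_batch` helper itself is hypothetical:

```python
import json

# Allowed values per coding dimension, as seen in the table and raw
# responses on this page (assumed to be the full value sets).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded comment.

    Raises ValueError if the JSON is malformed, a record lacks an id or
    a dimension, or a value falls outside the allowed set.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Hypothetical one-record batch, shaped like the responses above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["responsibility"])  # company
```

Rejecting out-of-vocabulary values at ingest time keeps a single malformed LLM reply from silently polluting the coded dataset.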