Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzSHVmNN… — The problem with killer AI is most of the “Terminator” sequels sucked. That’s th…
- ytc_UgxBeQaSs… — There's no camera system currently available for ADAS systems that can match the…
- ytc_Ugx-b7obY… — Please stop saying "AI Art". It is not art, it is a computer algorithm recklessl…
- ytc_Ugwc48KWL… — Notice how the map of Aurora’s coverage goal only covers the South? None of thes…
- ytc_UgxxDioq_… — What is the meaning of her last sentence in the realm of a better development of…
- ytc_UgxESPB3s… — "Artists should just quit, theyre useless." Hey bud, where did your algorithm ge…
- ytc_Ugyl8mIbi… — We got three components. Body, Mind and Soul. Industrial revolution took physica…
- ytc_Ugwwr36VS… — Stupid, can't knock out a robot with a fight. Intelligence wasn't part of the fi…
Comment

They need to implement an emergency method like the police do with tracking your location when you call. If someone is using language that leans to that side of violence, there needs to be a protocol that reports, AT THE VERY LEAST, the person to the authorities for a wellness check. It’s insane they haven’t already done something like this.

This is currently happening to a composer and podcaster, in real time. Can’t recall his name but he’s a British man, who did a podcast w his American wife. They’re going through a divorce and he decided to use dr*gs again, for “creative purposes”. Along the way, he began to say pretty unhinged things about himself and his ex wife, hacked into her social media accounts and changed photo captions to be insane, and was convinced he’d found this music method (using Ai and getting gassed up by it) to cure autism and communicate w his autistic son. There’s a lot going on there but he’s shown his chats and it’s new levels of danger.

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Harm Incident |
| Posted | 2025-11-08T04:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzPz2quVt4zowSXJ4Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4VWS9GH6HTQoXupd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy0AQdPw3eKWyMYugZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRtyTYN9AUKO-kmDB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8IWmIa7yCyOQzqTN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyjJXXMOxOPV8NfjmR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnaTewbUkBUx-2xGl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwIHn84JLKywju_MRZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugza1Qo8c1Prfo4hwK14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
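A batch response like the one above can be parsed and sanity-checked before the rows are written back to the coding table. The sketch below is a minimal example, assuming the allowed value sets inferred from this sample output (the actual codebook may define more labels, so treat `ALLOWED` as an assumption, not the full scheme):

```python
import json

# Value sets inferred from the sample batch above — an assumption,
# not the project's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; keep only well-formed rows.

    A row is kept when its id looks like a YouTube comment id and
    every coding dimension holds a recognized value.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not str(row.get("id", "")).startswith("ytc_"):
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Rows that fail the check are simply dropped here; in practice they would more likely be queued for re-coding, since a hallucinated label is cheaper to re-run than to silently discard.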