Raw LLM Responses
Inspect the exact model output for any coded comment.
**Comment** (youtube · AI Harm Incident · 2025-09-11T15:3…)

> They need to re-do the safety measures around morarlity and killing anything. The AI should only be used to gain information, it shouldnt be able to manipulate the information on its own without an active human user that can overide and shutdown the AI's actions. I understand the whole idea is autonomy, but unless there's proper fail safes put into place, then AI shouldnt be given the power to act proactively like this.
**Coding Result**
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
**Raw LLM Response**
```json
[
{"id":"ytc_UgwK-au70F1BsVfTM3J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugys0vulou7oAA5j9K54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWJPRYcImshKRiEdJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxewr6eIj6pAjSWEap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNoSTdFq4Qe6_bLJN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweWrY_0dBqkjl_m7R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3SMmYhiCsNIxCbLR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFf0Foa5oQhCutxOZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1blCldt9jCKuAe0V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"approval"},
{"id":"ytc_UgzwQ_jmfd3U0csOGvJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
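The raw response is a JSON array with one object per comment, keyed by `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and a single comment looked up by ID — the variable names and the two sample rows are illustrative, taken from the batch above:

```python
import json

# Raw LLM response: a JSON array of per-comment codings across four
# dimensions, as in the batch shown above (two rows reproduced here).
raw = '''[
  {"id": "ytc_Ugz3SMmYhiCsNIxCbLR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzFf0Foa5oQhCutxOZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for one comment.
code = codings["ytc_UgzFf0Foa5oQhCutxOZ4AaABAg"]
print(code["responsibility"], code["policy"], code["emotion"])
```

Indexing by `id` mirrors what the coding-result panel does: given a comment ID, it surfaces that comment's row from the model's batched output.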