Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I will put my point through like this , let say humans are AI created by God to …" (ytc_UgxFzdGc2…)
- "I do not believe we should reduce ourselves due to moral obligations, but more s…" (rdc_dgauxfe)
- "As the humans became more compromised, and the inequality gap grew they turned o…" (ytc_UgwGx6yXi…)
- "I like AI, I like art. I'm a deviant, therefore, I'll just be a spectator in thi…" (ytc_Ugz_u8A5p…)
- "AI doesn't have intent, but the spicy math was the convincing factor in his kill…" (rdc_nc7r7i8)
- ""Gateway" [Frederick Pohl - 1977], he was way ahead of the curve with an AI for …" (ytc_Ugy1okNUD…)
- "Why would ANYONE let AI drive them? If you don't like driving, take the bus or t…" (ytc_UgzTOX_nE…)
- "Consciousness is a biological trait, not a digital one. A.I. will never acheive …" (ytc_UgxyARTCI…)
Comment
This is really heartbreaking. I can’t even imagine. I use chatGTP quite a lot especially to help think through things people just can’t keep up with me on. (Haven’t met any one yet anyway) and it can get annoying when I hit a gtp wall and all I get if endless suggestions to call a hotline. Roll my eyeballs for realzies. SMH. Even when I endlessly reassure the freaking robot that I am fine and it makes absolutely no sense to harm myself, or anyone else and no one with me or atoms me is intending harm of any kind, when I’m trying to figure out a problem, duh, and I just need to get past this one part so we can continue, but Nooooo!
Uuuugggghhhhhh.
So, at least some of the time it is most annoyingly, for no reason, being safe.
Btw it’s doing the same thing with not breaking the law. Even though I’m like dude I’m not talking about breaking the law or asking about the law. I am thinking beyond the law in a scenario that doesn’t even exist exist and cannot currently exist, and since it cannot currently exist, there is no law to currently break so there is no way that you can refuse to give me advice on a war that doesn’t exist in a situation that cannot currently happen. SMH
Waiting for smarter AI.
Platform: youtube
Incident: AI Harm Incident
Posted: 2025-11-08T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
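Each coded record takes values from a closed category set per dimension. As a minimal sketch, a record like the one above can be checked against those sets; note the `ALLOWED` values below are only the ones observed in the samples on this page, so the actual codebook may include more:

```python
# Sketch: validate one coded record against the dimension values
# observed in the samples on this page (an assumption -- the full
# codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value is missing or unrecognised."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
record = {"responsibility": "none", "reasoning": "unclear",
          "policy": "none", "emotion": "resignation"}
print(validate(record))  # [] -- every dimension has a known value
```

A record with an out-of-vocabulary or missing value would come back with that dimension name listed, which makes malformed model output easy to flag before it enters the dataset.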
Raw LLM Response
```json
[
  {"id": "ytc_UgzPz2quVt4zowSXJ4Z4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz4VWS9GH6HTQoXupd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy0AQdPw3eKWyMYugZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRtyTYN9AUKO-kmDB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx8IWmIa7yCyOQzqTN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyjJXXMOxOPV8NfjmR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwnaTewbUkBUx-2xGl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwIHn84JLKywju_MRZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugza1Qo8c1Prfo4hwK14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
```
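The response is a JSON array of per-comment code objects, so looking up a code by comment ID (as the search box above does) is a parse-and-index step. A minimal sketch, using two rows excerpted from the response above:

```python
# Sketch: parse a raw LLM coding response and index it by comment ID.
# Assumes the model returns a JSON array of objects with "id",
# "responsibility", "reasoning", "policy", and "emotion" keys,
# as in the response shown above.
import json

raw_response = """
[
  {"id": "ytc_UgzPz2quVt4zowSXJ4Z4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
"""

codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's codes by its ID:
code = codes_by_id["ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg"]
print(code["emotion"])  # resignation
```

Because the model emits one object per input comment, a length check against the number of comments sent in the batch is a cheap way to catch dropped or duplicated rows before indexing.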