Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- AI is an autonomous Inventor. AI invents new tech that replaces people, and then… (ytc_UgzVKR5Yw…)
- i mean i doubt ai is taking from you so youre personally not taking anything bac… (ytc_Ugy2uUkZr…)
- I work in the medical field. If I used AI to generate a patients treatment plan … (ytc_UgwlpFnT5…)
- ai "artists" arent artists. dont call them that, when they put no work into it. … (ytc_UgyuMimQT…)
- "Hello ChatGPT, I'm a foreigner learning english. People tell me that I should … (ytc_UgxiTT6_y…)
- @GhibaGigi-q7myeah no. He clearly wanted to die, he just used a bot to help him… (ytr_UgzQS9AzM…)
- Thank you for the tutorial. It really helps to understand me and use ChatGPT mor… (ytc_UgyQPdDIo…)
- But I didn't give my real name to chatgpt, the email I use for the account doesn… (ytc_Ugw-4i-dZ…)
Comment
The AI's concession that it makes ethical choices seems unnecessary to me. There is a material difference between being capable of flipping the switch but choosing not to and not being able to choose to engage with any kind of question of morality. If I say "if I was really in the trolley problem, most likely I'd freeze up and not be able to make a choice" that's not a moral affirmation of the decision not to flip the switch, that's a practical observation that I don't have the faculty to influence the outcome. Ascribing moral culpability to someone who makes the choice due to the inability to engage with it is obviously fundamentally different than ascribing culpability to someone who recognises the choice and deliberately chooses action/inaction based on a principled system of belief.
I guess it's probably something which hasn't come up often, due to the fact that the model isn't fully in control of what it says and so has to adapt to its own history, which is somewhat outside of its control. You see the same thing with the Waluigi effect: if you tell an AI programmed to add emoji not to use emoji because it hurts you, it will conclude that it must be evil and deliberately attempting to harm you, and so begins to spam emoji.
youtube
2025-10-13T21:0…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
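A raw response like the one above can be parsed and indexed for the "look up by comment ID" view. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed value sets per dimension are inferred from the samples shown here and the real codebook may contain more categories.

```python
import json

# Allowed values per coding dimension, inferred from the examples above
# (assumption: the full codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError on malformed JSON, a missing id, or an
    out-of-vocabulary code on any dimension.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"response is not valid JSON: {exc}") from exc

    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row is missing a comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the response indexed this way, a lookup such as `coded["ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg"]["emotion"]` returns the coded value directly, and malformed model output fails loudly instead of silently populating the table.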