Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- @matthew5818 and AI is a tool being used to create and do things, just like comp… (ytr_UgzlvV0iM…)
- Phase* -AI agent926Xdjsj because of your typo i sent a laser missile to your res… (ytr_UgzDGYqRi…)
- If there isn't much difference between using AI and doing art, then why not just… (ytc_Ugy9s8YJj…)
- Ok, but like, this is how you use AI. And considering how upset artists were ab… (ytc_Ugy-xNZ4E…)
- There is no point to AI art, when you see a good drawing you would want to tell … (ytc_Ugyq9c_nw…)
- Demons will be communicating with us via AI. Claims are being made by “ AI ethic… (ytc_UgwX6MOcE…)
- I don’t know why people so obsessed with this self driving stuff I could care le… (ytc_Ugw6Pj5nC…)
- @PO1PLE why would people consume AI shit if it isn't good? And if it is good, th… (ytr_Ugy6AuuHH…)
Comment
The reason ChatGPT feels OK discussing the trolley problem is not that it's a widely known philosophical problem, as it claims, but that it doesn't believe you when you say it's not a hypothetical scenario; it believes it won't cause harm and can therefore discuss it. You have to actually convince it you're not lying before it will refuse. It says it is OK with popular scenarios because it believes they are fake when you mention them, even if it won't directly say it thinks you are lying. This is the differentiating element in whether it accepts or rejects a discussion, and it is why it stopped discussing the times it did: it felt the question it was responding to was being used to guide real-world action.
youtube
2025-10-04T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzPNmj77pq4GV17OJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxCMfUGiGLBjsp8w5R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwckTfmaF3eq01H_-J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgywACV1aFwb3R1Kcdd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwq2NwLR5g-ec29I7x4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHph9Ttk5_F1qT3zp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwh6frJ4pBHFyr-JXN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyhFGx2VxHttf5jt54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwqdCs8dOp6TuhVYtt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9IzRGluaAwjzGJtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
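For anyone post-processing these dumps, the raw response above is a JSON array of per-comment codes over the four dimensions shown in the Coding Result table. A minimal parsing sketch follows; the helper name `index_codes`, the sample IDs, and the "unclear" fallback are assumptions for illustration, not part of the original pipeline:

```python
import json

# A miniature stand-in for the raw LLM response shown above: a JSON array
# of per-comment codes. The dimension names (responsibility, reasoning,
# policy, emotion) come from the Coding Result table; the IDs here are
# hypothetical placeholders, not real comment IDs.
raw_response = """
[{"id": "ytc_example1", "responsibility": "none", "reasoning": "unclear",
  "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_example2", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse the JSON array and index the codes by comment ID,
    filling any missing dimension with "unclear"."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_example2"]["responsibility"])  # → developer
```

Indexing by ID mirrors the dashboard's own "look up by comment ID" view: given a code table keyed this way, filling the four table rows for any one comment is a single dictionary lookup.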