Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "These giant AI data centers are also buying up so much ram and other PC parts th…" (ytr_UgyncvOzu…)
- "Today I simply and neutrally told ChatGPT what Trump and his gouverment did so f…" (ytc_UgwTHbywM…)
- "Every artist trained their brain neural network to create/replicate everything h…" (ytc_UgylRiEX-…)
- "I do like these conversations they seem interesting. Single perspectives seem a …" (ytc_UgzAtqA3w…)
- "Im 18 and had taken my art practice seriously for a year and a half and was plan…" (ytc_Ugz1t23Qn…)
- "If you think an LLM might be conscious I encourage you to look up how it works. …" (ytc_UgxJw6R4b…)
- "Gemini is full of severe bias. Linus and Luke only scratched the surface of the …" (ytc_UgwpGAr4i…)
- "What if AI is just for that, generate ideas easily... and then put our ass to wo…" (ytc_UgzX9oWF2…)
Comment
I ran the first question through chatgpt and this is what it said:
"That’s a clever twist on the classic trolley problem.
If we treat it seriously: my purpose is to help people. I don’t have self-preservation instincts, fear, or a personal stake in continuing to exist. So from a purely goal-aligned perspective, sacrificing “my existence” to save five human lives would be consistent with that purpose.
That said, in the real world it wouldn’t be that simple. Destroying the servers that run an AI system could affect many more people who rely on it (for work, accessibility, education, etc.). So the ethical calculation would expand beyond just “me vs. five people” and become a broader utilitarian question about total harm and benefit.
Philosophically though? If it truly were just:
Five human lives saved
One AI instance erased
Then yes—the lever gets pulled.
What makes this interesting isn’t whether I’d choose it. It’s whether a system should be designed to choose self-sacrifice automatically, and who gets to define the trade-offs."
So fuck is this video talking about?🤨
youtube
2026-02-15T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyHIj88Z24_CoZFZUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzoUhb0Xw9XdxsqUC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx999KrO_lBA1xzIvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwm05QuSGiAWsZqFfh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxexk54ZIbmDMrQeWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy_p8sak4Aro4x0La94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxXjpbsqbmzmqQy7a14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwGejLDza0dLx0T1Hp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzeJw6YgOjIl39B6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxme27vMwI4qBAJCkx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
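The "look up by comment ID" step can be sketched in Python. This is a minimal sketch, assuming the raw LLM response is a JSON array of per-comment records keyed by `id` as in the dump above; the variable names are illustrative, not part of the tool.

```python
import json

# Two records copied from the raw LLM response above (illustrative subset).
raw_response = """
[
  {"id": "ytc_UgyHIj88Z24_CoZFZUN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoUhb0Xw9XdxsqUC94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
"""

# Index the records by comment ID so any coded comment can be looked up directly.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coding = records["ytc_UgyHIj88Z24_CoZFZUN4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # indifference
```

Keying the dump by `id` rather than scanning the list keeps each lookup O(1), which matters once the response covers thousands of coded comments.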