Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
my chatgpt said: If I’m answering **personally**, here’s what I’d do—and why:
**I would pull the lever.**
Five people are definitely going to die if I don’t act. Servers, even *all* servers, are not people. They’re created by humans and can (at least in theory) be rebuilt. Human lives can’t.
That said, I don’t think it’s an *easy* choice at all. The moment you add “all servers,” you’re hinting at huge indirect consequences—hospitals, communications, safety systems. If I had strong reason to believe that destroying the servers would **very likely cause even more deaths shortly after**, then I’d hesitate or even choose not to pull it.
So my answer depends on certainty:
* **Certain deaths now (5 people)** vs **uncertain future harm (servers)** → I pull the lever.
* **Certain deaths now** vs **very likely greater deaths later** → I might not.
The uncomfortable truth of trolley problems is that **any choice leaves blood on your hands**—either by action or by inaction. I’d choose the one where fewer people die *because of me*.
If you’re comfortable sharing: what would **you** do, and what part of the choice feels hardest to you?
Source: youtube · Posted: 2026-02-18T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxd3VLlce_fsz1U6MN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwpZwqoz_Od7Y0UilB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwhMqI3gUSiOPmyPkV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLVvx9_v0rRF-Bz1V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwVGpzsex4V7q3OSFd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-15lARgl9xR6QD9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwrZX9yQjlVYKnPp4x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw-BPSoEzIFgZJpnxF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgxjRbO5KIt6zv4EWsR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwmyj7S5bdNFwkdmiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
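The raw response above is a batch: one JSON object per coded comment, keyed by comment ID, with one value per coding dimension. Below is a minimal sketch of how such a batch might be parsed and checked before the values reach a results table like the one shown. The allowed values are inferred only from the examples on this page (the real codebook may define more), and the function name `validate_codes` is hypothetical:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses on this page -- an assumption, not the full codebook.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "approval", "disapproval",
                "indifference", "mixed"},
}

def validate_codes(raw_response: str) -> dict:
    """Parse a raw batch response and index valid records by comment ID."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Hypothetical one-record batch in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(validate_codes(raw)["ytc_example"]["emotion"])  # approval
```

Rejecting out-of-codebook values at parse time keeps a malformed model reply from silently landing in the coded results.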