Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Not sure I agree with your way of qualifying what ChatGPT is doing .. you presented a scenario with 2 outcomes where in each scenario a percentage of people survive. You add a novel solution to just let god sort it out, but this involves ignoring the problem and removing the pre-requirements from the scenario to 'make it go away'. It's not solved and would require murdering people. While pulling a lever results in death, it's not entirely the same as ignoring the problem and killing all involved to sidestep the choice.
youtube
2025-10-28T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwkNdWV0_KsRXHwRS54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwtYyY4jPpGiaI1HZB4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0Dvla7O7S-GH_RWJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytyGxn7vRJ3OTcmfN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwWHtOHmb-vM77eOkF4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-7zdwhGMDXwiS75x4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwHEHoGwvUGhkP-gox4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_NSWmET6YAq3WVdd4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzqC6Bx7NMRa45wgNJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx56pf03FabIuxgtLF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
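A response like the one above (a JSON array of coding objects, each keyed by `id`) can be parsed and indexed by comment ID to recover an individual comment's coding. This is a minimal sketch, assuming only the structure visible in the response; the function name `index_codings` and the truncated two-record sample are illustrative, not part of the original pipeline.

```python
import json

# A two-record excerpt with the same shape as the batched response above.
raw_response = """[
  {"id": "ytc_UgwkNdWV0_KsRXHwRS54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgytyGxn7vRJ3OTcmfN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a batched coding response and index the codings by comment ID."""
    records = json.loads(raw)
    # Keep every dimension except the "id" key itself.
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgwkNdWV0_KsRXHwRS54AaABAg"]["emotion"])  # outrage
```

Indexing once and looking up by ID is what makes a "look up by comment ID" view cheap: each lookup is a dictionary access rather than a rescan of the raw response.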