Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "The Tesla y without steering wheel is not available to the general public and o…" (ytc_UgzHQs25b…)
- "Yes ! AI tarpits to poison webscrappers ! I'm actually going to work on that as …" (ytc_UgyK0U4f7…)
- "3 days after the loss a "bug" in ~~skynet's terminator~~ google's self-driving c…" (rdc_d0yqg10)
- "For f sake🤦🏻🤦🏻🤦🏻 we dont need ai to steal our means to make money and pay bills.…" (ytc_UgxmYM7qx…)
- "As an artist that’s addicted to AI chatbots, AI can be a good and bad thing depe…" (ytr_UgwmS_42P…)
- "@AnyaGraves I'm outraged about the theft. Artists should be paid fairly for t…" (ytr_Ugyw0HwNv…)
- "Impressed by Dario's ethics and moral compass. I hope whoever took over cares as…" (ytc_UgxIjc0JM…)
- "I'm all for using AI for the good of mankind, for as long as "it" is an "it" and…" (ytc_Ugy4UcZOd…)
Comment
This paradox is a lot like the classic dilemma: 'If your mother and your partner both fell into the water and you could only save one, who would you choose?' I put this to five different AIs - ChatGPT, Gemini, DeepSeek, Perplexity, and Copilot—and the results were all over the place. It just goes to show that making a call in these situations is incredibly tough and always involves difficult trade-offs and sacrifices.
Source: youtube · 2026-04-12T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxWtSOFxQiWNXUQQPN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz971aZ4BLMj54OHDR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyZ8pLp82guKOmMQIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzqFm_4VyepOnvKHQl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYvtMlDmFOaLVxuyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwllwvHzYedx_ySotR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgycfA482ky5yE8xL4N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxC39hwrpHUe8beWTB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgyrK1hucmSTOvuOs014AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxLU_awGyZCTZxuF5V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
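Each coded comment above is one row of a batch response like this JSON array. A minimal sketch of how such a batch might be validated before it reaches the dashboard, assuming the category sets observed in these samples (the real codebook may allow additional values, and `validate_batch` is a hypothetical helper, not part of the pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "mixed", "resignation", "outrage",
                "fear", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    fall inside the schema; off-schema rows are reported and dropped."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        bad = [k for k, allowed in SCHEMA.items()
               if row.get(k) not in allowed]
        if bad:
            print(f"{row.get('id', '?')}: invalid fields {bad}")
        else:
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]')
print(len(validate_batch(raw)))
```

Rejecting rather than silently coercing off-schema values keeps the coding table honest: a model that invents a new label shows up in the logs instead of in the results.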