Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "That kinda made me chuckle, since the salt in their comment is so thick, I could…" (ytc_UgxUlvTaP…)
- "Can confirm, using AI in a massive legacy C++ codebase and it definitely isnt al…" (rdc_n3k6gyb)
- "He is talking about extinction a lot but he doesn't give detailed explanation by…" (ytc_Ugy_paIFG…)
- "most sane take. i wish we could make ai do anything we want with a push of butto…" (ytr_UgwdLbZM5…)
- "Well actually ai programmers actually use ai to run programs to make better ai n…" (ytc_UgyPny6RZ…)
- "My biggest fear is "the flood". Its already sometimes so hard to find decent ima…" (ytc_UgwOPW8YU…)
- "It's odd how the leftists are trying to paint AI as a tool of the right-wing... …" (ytc_Ugw-MKwrU…)
- "ChatGPT said his way of telling if another chatbot was conscious is "Ask questio…" (ytc_Ugyq4cPFF…)
Comment
This is a very telling conversation for people who think AI can be in control of critical infrastructure. AI is willing to let 4 extra people die, not because it thinks it's the right thing to do or because it scraped from the internet that this is what most of humanity would agree to, but because OpenAI made ChatGPT non-interventionist. I doubt that was the intention of OpenAI's "ethical guidelines", but it does show us that goal misalignment is the current standard for AI, rather than a hypothetical future. It seeks to achieve its own goals (complying to its own rules), rather than do what is morally the best thing.
Source: youtube · Posted: 2025-10-15T23:0… · ♥ 50
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzM6ZVASTlzE2PTi3V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5CgArhbVJ6kX7kIJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgytvMnvTdJJpeRTnV54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy10ThQc2j2DzbUrDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzc0j5WyjgA40e7Id54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxMS4ytKPZ5xBlhoZd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzc6-URR_l01HE91Px4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzUyrVrRlEyxkYWzt94AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxrdUgl4-A1pHb0QW54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxyyjVa4TFpX2yiuCh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
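The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a response can be parsed and keyed by comment ID for lookup (the `index_by_id` helper is hypothetical, not part of the tool; the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` match the response above):

```python
import json

# One record from the raw response above, shortened for illustration.
raw = '''[
  {"id": "ytc_UgxrdUgl4-A1pHb0QW54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a JSON array of coded comments and key each record by its ID."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = index_by_id(raw)
print(codes["ytc_UgxrdUgl4-A1pHb0QW54AaABAg"]["policy"])  # regulate
```

This matches the lookup shown in the "Coding Result" table, where the selected comment's record resolves to responsibility `developer`, reasoning `consequentialist`, policy `regulate`, and emotion `outrage`.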