Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Oh, the Machine Intelligence Research Institute… It's funny how all those people who earn their money by telling everyone that AI will kill us all, all come from the exact same backgrounds as the people who build those AIs. It's all this effective altruism soup, they all finance each other. The entire AI alignment research sector is basically just a way for AI bros to give more well-paid positions to their pals. The Machine Intelligence Research Institute is literally funded by cryptocurrency dudes and think tanks with ties to OpenAI.
youtube
AI Moral Status
2025-10-30T21:5…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydK4YU0WvkkXDhLZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyLW75ItQyohqOU8-x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyi3pryPPZ16W5-jrN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyAcSPetC-PdFpwvhx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyCbY8TYZcio_FCw7B4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxqV2VekvkpMAdPBXd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwR5aqfElxaSpKXGOl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyOXNQrSMo9rDaxXcJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz35HnxfBiL56aUr4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
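The raw response is a JSON array with one coding object per comment ID, and the table above is simply the row for the selected comment. A minimal Python sketch of how such a batch response might be parsed and looked up by comment ID (the helper name `index_codings` is hypothetical, and only two entries are reproduced here for brevity):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment ID.
raw_response = """[
  {"id": "ytc_Ugz35HnxfBiL56aUr4J4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugz35HnxfBiL56aUr4J4AaABAg"]
print(coding["policy"])  # -> liability
```

Indexing by ID keeps the lookup O(1) per comment, which matters when one batch response covers many coded comments.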