Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI should just be BANNED full stop! There's no place for this scammery at cons.…
ytc_UgyO_UBpU…
I think you should interview me because I know all the answers why the AI is nev…
ytc_UgwLzuT9V…
The AI gave a crisis number after he already unalived itself!! Close OpenAI and …
ytc_Ugzi9_vko…
Evil newspaper is suing evil ai company
Lol
Both of them can go to hell…
ytc_UgzG3EjG1…
AI generated images aren’t art and the people who simply type in a prompt to gen…
ytc_UgzuWpmxS…
Get rid of a lot of AI but in reality we are not even close to having true senti…
ytc_UgxEF-DA5…
This deep fake issue is horrifying, but telegram is also how these Trump support…
ytc_UgxsadTqR…
I am extremely skeptical of this vision. If no one has jobs then they can’t buy …
ytc_Ugxd4DpuW…
Comment
Melanie Mitchell's fallacy of dumb superintelligence showcases how she doesn't understand alignment at all. Just because an AI understands the real intention of a human doesn't mean they will act for the intention rather than what the human is rewarding it for. Consider obvious examples of employees who slack off. It's not that they aren't intelligent enough to know their boss wants them to work as hard as possible, it's that their reward (the salary) doesn't incentivise that. So this is not really a "fallacy". A machine can both understand a human's intention and not act upon it! Ultimately you are back to having to solve alignment and design reward functions that exactly correspond to human values which I think is impossible
youtube
AI Governance
2023-08-19T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxT0jzYgY0XdOQ4cqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEkCQtq92SLKPlPNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwLVGaFFl8nCHEepqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx52BnGLYa6UxbMX294AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxAci_nguooo5v0NRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyILhQ_KsZ-b-C-Lqx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzDi4kiS-bSe3g-LhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0PFkivatSns4E8xd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxedCS7pDsuymN4QxF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzprZVcmX1iB91yZPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
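The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of turning such a batch response into an ID-indexed lookup (the function name is hypothetical; the field names and the example ID/values are taken from the response above):

```python
import json

# One entry copied from the raw batch response above; a real response
# contains one such object per comment in the batch.
raw_response = """[
  {"id": "ytc_UgyILhQ_KsZ-b-C-Lqx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgyILhQ_KsZ-b-C-Lqx4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer mixed
```

Note that model output is not guaranteed to be bare JSON; in practice the array may need to be extracted from surrounding text before `json.loads` will accept it.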