Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- "2:35 ChatGPT so desperately wanted the focus off this topic that it changed the …" (ytc_Ugwc5PvnE…)
- "AI cannot turn against us, because it does not have any understanding of ..Well …" (ytc_Ugx_zIuAw…)
- "1:31:00 This was the original plot of The Matrix. Humanity was used as a compute…" (ytc_Ugx1Praam…)
- "Using AI to predict future crimes is an extremely dangerous idea. If you give an…" (ytc_UgyUT2ve0…)
- "I work in the automotive manufacturing industry as one of the guys implementing …" (ytc_UgyuCitL1…)
- "The risks of AI are daunting! I’ve started using AICarma to track how my brand i…" (ytc_UgzHZoETR…)
- "A $1000 a month check will not work well for millions of Americans who are now m…" (ytc_UgwfRqnN6…)
- "Always remember: Seeing less AI generated images does not mean seeing less ai ge…" (ytc_UgywO0hn_…)
Comment
Something I personally think is scary is the fact that it values giving you what you want over what is right. For example, when using ChatGPT in my university studies, it often suggests "helping" me with inferior study techniques. When I ask it about this, it often admits that it knows its inferior study techniques but also knows that this is what most people want. And that's the problem, it cannot evaluate based on pure ethics, only learnt ethics, it cannot see that one of those things might be amoral. Furthermore, every time I confront it with a mistake, it doesn't accept responsibility but instead tries to trivialize it. ChatGTP is like someone who murders all the neighbors' animals and doesn't really see the problem with that, then, when it's confronted, it goes "You're right. That was very wrong of me; I should have known better. Now let's continue. Do you want me to show you why murdering animals is wrong?"
youtube · AI Moral Status · 2025-12-24T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
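The "Coding Result" table above can be rendered directly from one coded record. A minimal sketch, assuming a plain dict with the field names visible in the table (the helper and record shape are hypothetical, not this tool's actual API):

```python
# Hypothetical helper: render one coded record as the markdown
# "Coding Result" table shown above.
def coding_result_table(record: dict) -> str:
    rows = ["| Dimension | Value |", "|---|---|"]
    # Display labels for the dimensions, in table order.
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
        "coded_at": "Coded at",
    }
    for key, label in labels.items():
        rows.append(f"| {label} | {record.get(key, '')} |")
    return "\n".join(rows)

# Example record matching the values shown above.
rec = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "fear",
    "coded_at": "2026-04-27T06:24:53.388235",
}
print(coding_result_table(rec))
```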
Raw LLM Response
```json
[
{"id":"ytc_Ugwm8_h2p9LNnmCDEj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzI2bVlKYQMP1ZW_194AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-zG3UoqRsCY7H6mJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-KJsXL8HqlOyNJmh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvrCnHEb3ujysE_Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg3zUs-rnpzwByyzZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoOQ6HcazP_ip9cO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzXCTNRR8HPXt0DE7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy8W8tQQl8R7qCthJV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwPIrgUrwNvkP2-5EZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
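A raw batch response like the one above can be parsed and sanity-checked before it is trusted. A minimal sketch; the allowed label sets are inferred only from the values visible in this sample, so the real codebook may define more:

```python
import json

# Allowed labels per dimension, inferred from the sample response above
# (assumption: the actual codebook may permit additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index it by comment ID,
    rejecting any record with an out-of-set label."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a one-record batch (hypothetical ID for illustration):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded["ytc_example"]["emotion"])  # → fear
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each dashboard lookup is then a single dict access rather than a scan over the batch.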