Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Do you think AI coding fails mostly at glue code and edge cases? Olovka’s citati…" (ytc_Ugzy9ebc_…)
- "If we can't develop a true democratic government how in H are we going to devel…" (ytc_Ugx8xK6ZO…)
- "People who call themselves AI 'artists' may as well order a pizza with custom to…" (ytc_Ugyw1sMHs…)
- "This isn't bias at all. A presumably atheist white dude trying to do bias traini…" (ytc_UgzkQwkYd…)
- "AI can be bad for who need to get on the bottom of the rung of the ladder at a l…" (ytc_Ugy-ItlFf…)
- "@BlueItulips Let’s get one thing straight: claiming AI is stealing art is a huge…" (ytr_UgyR1dNrw…)
- "Always remember… AI ‘display’ cannot have layers. Speed paints? Maybe, but they…" (ytc_UgwhvAmNV…)
- "They said the same thing about the internet. How it will be the end of 'going ou…" (ytc_UgwWwQAwd…)
Comment
😂😂😂It's extremely unlikely for an AI to actually behave like that in any genuine way. AI models, like the ones used for language, don’t have consciousness, emotions, or motivations. They generate responses based on patterns in data they've been trained on. So, if an AI were to produce a response that sounds like blackmail, it would only be because it was prompted or led in that direction by the input it received. It’s more about the prompt and the training data than any actual intent from the AI itself.
Sounds like it's time for CNN to grow up and stop Fear baiting and the only reason why a I perform that way it was probably because he knew the user was dumb enough to fall for it. That's 11 minutes I'll never get back again
youtube · AI Moral Status · 2025-06-27T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyq3d9bVb9HZozZhA54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-9xODMzRnoPG7h6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxiXKsjFN5EbkJXb5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbZ5GJVPHCuu5arRJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyj0qLQJkud99k965d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwrYWgkxq7oH0Q0kA94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw0F7WtAZtYplu0lpJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz9UjMppjLgQ7NUYs14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz31sUhyY0MMSJ91aV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxXEQ6fC8l3E-8wTDB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
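The raw response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such an output could be parsed and indexed for lookup by comment ID; the field names come from the response above, while the variable names and the two-row sample are illustrative, not part of the actual pipeline:

```python
import json

# A two-row excerpt in the same shape as the raw LLM response above.
raw_response = """[
  {"id": "ytc_Ugyq3d9bVb9HZozZhA54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-9xODMzRnoPG7h6Z4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

# Index the codings by comment ID so any comment's coding is a dict lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugyq3d9bVb9HZozZhA54AaABAg"]
print(coding["emotion"])  # indifference
```

Indexing by `id` rather than list position matters here because the model may return codings in a different order than the comments were submitted.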