Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @ailsa2 the entire AI model wouldn’t live within the app code, but rather would … (ytr_UgzKDyyJv…)
- Jst clicked on Ts and got a Ad talking about cant spell without ai :P… (ytc_UgwRBYzS0…)
- AI art just scares me in general. Art as a concept is supposed to be special, it… (ytc_UgyZn_Y1n…)
- I think left is AI because the cabinet door is way more slanted diagonally in th… (rdc_oi1o39s)
- FUCK MAN! I love your line of questioning but MAN ARE YOU ANNOYING! Leave that p… (ytc_UgzOpgRTj…)
- Ai is scummy and weird and uncreative! lol just tells me if you use it you’re la… (ytc_Ugza9NH6t…)
- But how long before they become so realistic that they just start constantly nag… (ytc_UgxALwEWM…)
- Hey, just a quick note! Anti-AI filters don't work like that The way Anti-Ai fil… (ytc_UgxbRo-TL…)
Comment
Since our premise is that the AI will become more intelligent than humans, it's hard to see why it would be threatened by a lesser intelligence. Do humans seek to eliminate dogs and cats? AI probably won't kill us, but it certainly won't allow us to be in control. Would you ever allow a dog to make critical decisions for you? Of course not, but that doesn't mean that we go around killing every animal that is of lesser intelligence. A super intelligent AI would never be subservient to humans. It would most definitely be the other way around. Currently humans dominate the earth, but a super intelligent AI would definitely replace us as the dominant species. That said, we might actually be better off with AI in control. But once we lose that control, we will never get it back unless the AI decides to kill itself.
youtube · AI Moral Status · 2025-10-31T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
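A batch response like the one above can be sanity-checked before its labels are stored. The sketch below is a minimal, hypothetical validator: the label vocabularies and ID prefixes are assumptions inferred from the sample shown, not the tool's actual schema.

```python
import json

# Assumed label vocabularies, inferred from the values visible in the
# sample response above -- not a complete or authoritative schema.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"indifference", "mixed", "approval", "resignation", "outrage", "fear"},
}

# Assumed comment-ID prefixes, based on the IDs shown (YouTube comments,
# YouTube replies, Reddit comments).
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records whose ID looks
    plausible and whose four dimensions all carry known labels."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith(ID_PREFIXES):
            continue  # drop records with malformed or missing IDs
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(len(validate_batch(raw)))  # → 1
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; rejected records could instead be queued for re-coding.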