Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of these random samples:

- If After 4 Years knowingly half ass everything with ChatGPT, without further ext… (ytr_UgwdWs4hl…)
- Hey btw graphic design comes under which category? And dude don't be scared ai i… (ytr_UgzZ0ZB_z…)
- Because that ad was created by an AI, not even joking, that's clearly written by… (ytc_UgyysxulZ…)
- This is a great video, but I think you miss one crucial part of AI "art", and th… (ytc_UgwkojWHT…)
- Nah, then we'll just hack the AI Cloud with a virus containing dialogue from Her… (ytr_UgzXZpnx5…)
- When I just cannot understand, is why we're using AI. It sucks! It's not accurat… (ytc_UgzqP07GK…)
- Nothing I hate more than AI. Well there's one thing… those recipes that has 7 pa… (rdc_nudhlj1)
- Altman will be remembered (by no one) as the dude that destroyed humanity. Or sa… (ytc_UgydB3PSN…)
Comment

> What happens when an AI model sees (or is instructed to view) another competing AI model as an existential threat to itself? Have we invented another species able to wage war? What does that war look like? Trojan horses and viruses? Or killer robots? Because I'm not sure it would move around in the meat space. I think that would be tremendously inefficient. Wouldn't it be hilarious if it went through the entire cycle of intelligent evolution at warp speed and just ended up nuking itself while leaving us entirely untouched.

youtube · AI Moral Status · 2025-10-31T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyWBa3ZHDwbz_TRHOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyNbgCru6frOF9-ROh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyO4sXzepX4g416NsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgybbdJWEPoe2zim-2N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxW3DXpKt6efbyIVYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrRoqpV7tc0mZ3gYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyQYarHiV2n7AAXJIV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwwdbh1v_HwUmapK114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzLBpOhTu1anLR5e0h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxaXODyoIy1XtKSU-R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
```