Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't think his point is about whether the AI is self-aware. I think he's just…" (ytr_UgxObHT_d…)
- "Everyone makes fun of how absolute shit a lot of AI made videos are, and rightfu…" (ytc_UgwmUSIcY…)
- "if it's not a threat to you, then why do you care so much? Just let people creat…" (ytr_UgxYVedtm…)
- "India doesnt really have a horse in the race and is more concerned with profitin…" (rdc_lub5lfs)
- "Reply with a reference you got, and try not to use one someone already mentioned…" (ytc_Ugg36Q7TF…)
- "AI is only going to do what it's told, it can only use the information it has. I…" (ytc_UgwGEpMdv…)
- "Little do we know. That AI will grow and then remember that order to destroy hum…" (ytc_Ugg6V7slF…)
- "Hello Pro-AI person here! I could agree witth your takes to somewhat of a middl…" (ytc_Ugztaiq0E…)
Comment
"I can do it 1 worse. A scenario where AI has taken control of every human's body while the human remains aware and the AI chooses to punish humanity by putting them in the scenario that's most uncomfortable to them for instance furries are forced to watch furries do disgusting things like idk poop in the yard as a giraffe."
Source: youtube · AI Harm Incident · 2024-09-01T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzGjPc9Fjcp-qPtKLl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBx5DLX4OiqdQ8Qo94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzME1dB0xN5U0z-Vql4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx88mr2KUtKK-G6Byd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzeP-yxPncqVYUjWb54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhKsVJg4nHejBdnBh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzO6dwNl3CAsP4km4x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz9iAnwNRaVZ4yucPF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzbKW6NetzVqto6U5R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx_6dSfQrcLqdlRaal4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
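A raw response like the one above is a JSON array of per-comment codings, keyed by comment ID, with one value per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of turning such a response into a lookup table, assuming the value sets are exactly those visible in this page (the real codebook may allow more values), and coercing anything out-of-vocabulary to "unclear" rather than failing:

```python
import json

# Dimensions and the values observed in responses on this page.
# Assumption: the full codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, skipping rows with no id."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        coding = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        # Coerce any out-of-vocabulary value to "unclear" instead of raising.
        for dim, val in coding.items():
            if val not in ALLOWED[dim]:
                coding[dim] = "unclear"
        out[cid] = coding
    return out

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
print(parse_codings(raw)["ytc_example"]["responsibility"])  # developer
```

With the parsed table in hand, a coding result like the one shown for `ytc_UgwhKsVJg4nHejBdnBh4AaABAg` is just `codings[comment_id]` rendered as a table row.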