Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I wonder, would it be possible to create an AI which can have subjective experie…
ytc_UgxLcR5sB…
This happened because AI is allowed to train itself. It will support a person in…
ytc_UgykYrYe1…
Yeah, AI is a big worry. But. I graduated college three years ago now with a deg…
ytc_Ugx6tzpf0…
Don’t trust AI in the hands of the Americans, being an enemy of the United State…
ytc_UgwyCy99C…
I don’t know why this guy is asking, “what are we going to do for 60 hours a wee…
ytc_UgyLQCwQo…
It's OK, Aussie. Canada is right there with you.
And at least you beat Japan, S…
rdc_da4aplh
Yay. I'm alive. Let's go. Oh no. I'm touched. I'm gonna die. Oh no I can't die I…
ytc_UgwQVMP35…
I used open AI 2 days ago and it told me that one minute was less than 17 second…
ytc_UgzZ9qYof…
Comment
Did nobody read Asimov?
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube
AI Harm Incident
2025-07-24T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHN0aHaovq7eeuyCV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzXGDEgrTHvwu0uLmJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzVgOy5YXG04NKcc954AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxkKOodE7wVmT03RJV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxAmjXa6TmFc0mYUnJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzDnmttbqB9oF5m0uh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcukoamWoMCCWN0Eh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwLYHV3zssnKAr7Kyt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx1cjMRO8w7VjwC8T54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxsjlRRJmn5aUpBubd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
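A raw response like the one above can be turned into the per-comment coding table with a short Python sketch. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown; the allowed-value sets are assumptions inferred from the values visible here, and the real coding scheme may define more categories.

```python
import json

# Allowed values per dimension -- assumed from the values seen in this
# response; the actual codebook may include additional categories.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Map comment ID -> coded dimensions, skipping entries without an ID."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        codes = {d: entry.get(d, "unclear") for d in DIMENSIONS}
        # Flag off-codebook values instead of silently dropping the entry.
        codes["valid"] = all(codes[d] in DIMENSIONS[d] for d in DIMENSIONS)
        coded[cid] = codes
    return coded

# Hypothetical single-entry response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
result = parse_codes(raw)
print(result["ytc_example"]["policy"])  # regulate
```

Keeping a `valid` flag per entry makes it easy to surface comments where the model drifted outside the codebook, rather than failing the whole batch.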