Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
why tf would you give it the option to essentially decide when it doesn’t want to answer? None of the questions should have triggered the Apple response. The AI seems to have just not wanted to answer. Those questions were framed the exact same way as the other questions.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2025-07-21T11:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwXSkL2UhU4uJvkZVJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQ-VBYZX1eoq-x5sJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxkaLJxCxmIl_W_yDZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgydqVnLOQi6aIOT5el4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxD9BAz9iEUTbZ53wZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQiEX5NxoaiwaJkO94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgybKOvGpnGoHSsWNUR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzaze8nQNzQv6tEEX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwaRz6wFmt73WItXil4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxPaQJL8alLkw6W2xt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
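A response like the one above can be machine-checked before the coded rows are stored. The sketch below is a minimal example, assuming the allowed labels per dimension are exactly those observed in the responses on this page (the real coding scheme may define more): it parses the raw JSON, drops rows with unknown labels, and indexes the rest by comment ID.

```python
import json

# Allowed labels per coding dimension, inferred from the values observed
# in the raw responses above -- an assumption, not the full scheme.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "mixed", "resignation", "fear",
                "indifference", "approval"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response; keep only rows whose id looks like a
    YouTube comment/reply id and whose labels are all recognized."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        valid = all(row.get(dim) in labels for dim, labels in ALLOWED.items())
        if cid.startswith("yt") and valid:
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugzaze8nQNzQv6tEEX54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"outrage"}]')
coded = parse_coding(raw)
print(coded["ytc_Ugzaze8nQNzQv6tEEX54AaABAg"]["emotion"])  # outrage
```

Rows that fail validation are silently dropped here; a production pipeline would more likely log them for re-coding.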