Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "But still I don’t understand why AI cannot create its own rules? RP: your ques…" (ytc_UgyTkP9M2…)
- "It's insane to me that people are looking for solutions to automation when autom…" (ytc_Ugw2P0CAZ…)
- "I keep hearing that it's coming soon. But so far nothing. I hear the hype of wha…" (ytc_UgxSmaOhq…)
- "@alperakyuz9702 No, but the gov.(courts especially) shouldn't use an algorithm w…" (ytr_Ugw0FgH85…)
- "It's pretty cool to see high-profile technical professionals debate each other; …" (ytc_Ugw_yU-kt…)
- "AI has no Racism problem, it's problems are basically traced back to Humans beca…" (ytc_UgwEGY7g4…)
- "What AI. At the moment all we have are language models that can write code and c…" (ytc_UgwbB_xMD…)
- "Human: Oh Chloe what beautiful eye's you have. / Robot: so I can see you better. / H…" (ytc_Ugx3149wK…)
Comment

"Haha! Saying please and thank you to be spared by AI when the apocalypse comes. This struck home when I heard it. I've felt compelled to politeness when dealing with AI and didn't know why. This registered instantly with me and it's because it's the subconscious thought I must have had.🤔"

youtube · AI Moral Status · 2026-03-29T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYL2fZu8sMneZI8GV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVC9v0fFGXT_Zq06B4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx_J4l17id1SuCK_Fd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz5VPapYqsmc3NcWDx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy6yNTw-VHH-kPoFIl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzNoH9FhSImE9Xz4d94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyhFfNOuHJbx6myCMt4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy3BmlsGWuky0fu6g14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0aEEUGws7r9AngRp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzX5VAtatyjhWCxXi94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
```
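A response in this shape can be parsed and summarized with a short script. This is a minimal sketch, not part of the tool itself: the field names match the JSON above, but the `parse_codings` helper and the validation rule (dropping records missing any dimension) are assumptions for illustration.

```python
import json
from collections import Counter

# Example raw LLM response: a JSON array of coded comments, one object per
# comment ID, with four coded dimensions (shortened from the dump above).
raw = """
[
 {"id":"ytc_UgxYL2fZu8sMneZI8GV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz5VPapYqsmc3NcWDx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
"""

# Field names taken from the JSON records shown in the dump.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(payload: str) -> list[dict]:
    """Parse one raw LLM response, keeping only fully coded records."""
    records = json.loads(payload)
    return [r for r in records if REQUIRED_FIELDS <= r.keys()]

codings = parse_codings(raw)

# Tally one dimension across the batch, e.g. attributed responsibility.
responsibility_counts = Counter(r["responsibility"] for r in codings)
print(responsibility_counts)
```

The same `Counter` pattern works for any of the four dimensions, which is enough to turn a batch of raw responses into the per-dimension counts a dashboard like this displays.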