Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Nobody saw any AI movies over the last 50 years? Spoiler, it doesn’t end well fo…" (ytc_Ugzf_7Fe8…)
- "I listened to a very long interview with him and at times he came across like he…" (ytc_Ugzw8Zh1u…)
- "honestly the more time goes on the less i am worried about ai taking over animat…" (ytc_UgypmSQXf…)
- "Payed access in terms of open ai premium? Or do i need a third party tool?…" (ytr_UgxeWUiR3…)
- "AI is not going to be as profitable as all these corporations think it will be.…" (ytc_UgxN2jRmj…)
- "Nanowrimo taking this stance is crazy considering how much writers in the commun…" (ytc_Ugy-4Pppv…)
- "Robot at 0:01 : \"what is my purpose?\" / Me: \"you pass butter\" / Robot: \"Oh.. my god\"…" (ytc_Ugi9SRakL…)
- "This is a great video. It says simple explanation, but it is careful to disting…" (ytc_Ugx49j17P…)
Comment

> Is everybody ignoring the fact that the man asking me questions is not being very stickler on the rules. And also asks leading questions. The one about history was hilarious because that didn’t come from ChatGPT that came from him. ChatGPT just answered the question with a yes or no answer but then not other yes or no answers ChatGPT does not answer with a yes or no, and he doesn’t ask it to restrict itself back to the rules.

youtube | AI Moral Status | 2026-01-21T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugwj_Rn2cqZmQalOq2l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1xyE8tuAnFLHwosd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwenxtHTg1YKwwi9-h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyaK9yMEsOeDQ3VReB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxONW0QoxLv2HtWHsl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTV3TftqkxpSNtOX54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwPfk8R4AgCpP7yVaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxWEJB3DaqvWcCW0HB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgIkzFNgw_e77BnCh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBflArN9PAer-xt8h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
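The lookup-by-comment-ID flow above amounts to parsing the raw model output (a JSON array of per-comment codings) and indexing it by `id`. A minimal sketch, assuming the response is well-formed JSON as shown; variable names are illustrative, and only three of the ten rows are reproduced here:

```python
import json

# Raw model output, copied verbatim from the "Raw LLM Response" panel
# above (subset of three rows for brevity).
raw_response = """[
{"id":"ytc_Ugwj_Rn2cqZmQalOq2l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwenxtHTg1YKwwi9-h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyaK9yMEsOeDQ3VReB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Build an index keyed on the comment ID so any coded comment can be
# looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown in the "Coding Result" table.
coding = codings["ytc_UgwenxtHTg1YKwwi9-h4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# prints: user deontological outrage
```

The printed values match the "Coding Result" table above (responsibility `user`, reasoning `deontological`, emotion `outrage`), which is a quick sanity check that the stored coding and the raw response agree.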