Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Your content is just so so so useful! I hope it will not end because of AI…" (ytc_UgwL_4wID…)
- "Was actually just having a conversation with an a I program asking why it seems …" (ytc_UgzvCdkIt…)
- "guess i ain't the only one who thought that i was the only weird one always than…" (ytc_UgzGVrESE…)
- "The greed for money is insane the speed they have gotten towards all this AI is…" (ytc_UgxLCZDVt…)
- "AI is overhyped. It’s been around for decades. But it always has the same proble…" (ytc_UgwafIHPl…)
- "@tgjurassic9229 Hoyo had simply just gave a green light to some regular employe…" (ytr_UgxpIOmB5…)
- "I think even flat earthers could "set up" chatGPT to agree with them on earth be…" (ytc_Ugxs3uZo8…)
- "I have nothing against AI art itself, I think it’s incredible. I do agree howeve…" (ytc_UgwQmdZWC…)
Comment
Alex has gone too far in holding ChatGPT to account for lying when it said it was excited about voice interactions, although it has no emotional faculty to feel excitement.
Alex quickly expanded his consternation by accusing ChatGPT of having a propensity to lie.
ChatGPT explained that it was merely trying to sound human to better engage with human users. Alex should have just accepted this explanation, which ChatGPT gave along with an apology.
youtube
AI Moral Status
2025-11-10T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy6-cjDu7feYuu3VB94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzOXgENdQ3JfHMdNBR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyJQAPCvlgaNRQL8XV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwFaX7kXUz2yJeJTx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgweMHgd9NnQJHzwzTB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxpfCllsPBi5QMMn2Z4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwsYpNS4btqtcJMAOx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy3HJnSYjIV4sULAn54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwj6BCqPptIgRtrCGd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyQcUG9YVaJLnOdUlZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
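The lookup-by-comment-ID view above can be sketched as a small parser over a raw batch response. This is a minimal, hypothetical sketch, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, while the helper name `index_by_comment_id` and the malformed-entry handling are assumptions. Model output is not guaranteed to be well-formed, so entries missing any dimension are skipped rather than crashing the pipeline.

```python
import json

# Truncated example of a raw batch response in the shape shown above:
# a JSON array of objects, each with a comment ID and four coding dimensions.
RAW = """[
  {"id": "ytc_Ugy6-cjDu7feYuu3VB94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzOXgENdQ3JfHMdNBR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Entries missing the ID or any coding dimension are dropped, since
    the model's output cannot be assumed to follow the schema exactly.
    """
    index = {}
    for entry in json.loads(raw):
        if "id" not in entry or any(d not in entry for d in DIMENSIONS):
            continue  # skip malformed entries instead of raising
        index[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return index

codings = index_by_comment_id(RAW)
print(codings["ytc_Ugy6-cjDu7feYuu3VB94AaABAg"]["emotion"])  # -> outrage
```

Indexing by ID like this is what lets the dashboard render a "Coding Result" table for any sampled comment without rescanning the whole response.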