Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
God, I hate ChatGPT. Not AI in general, just ChatGPT. It’s way of talking just sounds so condescending and fake, and if I try to debate with it, I generally come out feeling drained, annoyed, disappointed in reality, and like I accomplished nothing but wasting time. Seriously, how does anyone think programming moral guidelines into an AI is a good idea? Yes, I get there needs to be some regulation to prevent AI from blatantly lying, encouraging people to do harm, etc., but this is so not the way to go. Morality is not black-and-white, so ai morality really shouldn’t be either,😢
Sorry about grammar, I’m blind and using speech to text.
Source: youtube · 2025-10-15T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyIFGk6tCItgBp7V4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugww43cHU9ErtCnvRZB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwxwFr__8Gur_VzsnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz0j1AgtucfAjX79gl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyWSO0QwXrdr1u8iVx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFlOe4NQwrqBxfX4F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
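The raw LLM response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch payload could be parsed and a single comment's coding looked up by ID is shown below; the names `RAW_RESPONSE` and `lookup_coding` are illustrative, not part of the tool itself, and the two rows are taken from the response above.

```python
import json

# Two rows copied from the raw LLM response shown above, standing in for a
# full batch payload. In practice this string would be the model's output.
RAW_RESPONSE = """[
  {"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

# The four coding dimensions used in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id.

    Raises KeyError if the id is absent from the batch, and ValueError
    (via json.loads) if the model output is not valid JSON.
    """
    by_id = {row["id"]: row for row in json.loads(raw)}
    row = by_id[comment_id]
    return {dim: row[dim] for dim in DIMENSIONS}

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg")
print(coding)
# {'responsibility': 'developer', 'reasoning': 'mixed', 'policy': 'regulate', 'emotion': 'outrage'}
```

Indexing the batch by `id` mirrors what the "Coding Result" table displays for the selected comment: the row for `ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg` carries exactly the developer / mixed / regulate / outrage values shown above.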