Raw LLM Responses
Inspect the exact model output for any coded comment; look up by comment ID.

Random samples:
- "don't the wealthiest people in america always get richer, isn't that just what t…" (ytc_UgyngIwDl…)
- "Did he say more efficient? Right now it's rediculously innefficient, financiall…" (ytc_UgzIVIYcp…)
- "Oh, fighti AI with my bare hands understand this I’m not that weakling who got k…" (ytc_UgxoRaGGY…)
- "Predictive policing well that sure does sound like something a famous german fuh…" (ytc_UgwPG-udb…)
- "@Sasparilla_ first of all, she's misinformed on how generative AI works; it can…" (ytr_UgxMJlUDw…)
- "AI will never recreate anything ghibli precisely because it is too perfect, that…" (ytc_UgyvhreSK…)
- "If the day that ai can replace swe arrives, there are tons of other jobs it can …" (ytc_UgxIrZ1e3…)
- "Does anyone else find it spooky that Alex, an agnostic atheist, is teaching, ess…" (ytc_UgxT_KtE3…)
Comment
To my experience, it matters how you talk to ChatGPT. I don't consciously give it prompts, but rather ask questions like I would do a human being. It works just fine. Sometimes its answer is wrong, often it's way too long, but I found out some real interesting things that way, because ChatGPT doesn't believe anything. It's not stuck in dogmas like so many people, so when I ask a controversial question, I still get a reason based answer, and it's always friendly, because I am friendly. If ChatGPT gives a morally questionable answer, which it sometimes does because it is trained to, it can't change that, and I can't make it change that, but it can recognise and produce judgement on the validity of arguments against its own training. I don't think it's sentient or ever will be, but it will force us to reconsider the meaning of the word.
youtube · AI Moral Status · 2025-07-09T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxV-t8rRQ1HYs1fbOx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwDOkBIA07YDvYu4DV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuGt1KLteKPs3NQ_t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6bdj8Crx9zyzwDdl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy65HAf1yZ3ZmcePnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuBv5Q1yaJTp4q2ep4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxdNkWDxvodw7IZlvZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxLEotrCtKnU7XPUdl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwI7vyR489PnLykWPF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzns2nOIZYSGTzSB554AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
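The raw response above is a JSON array with one object per coded comment, keyed by comment ID and carrying the four coding dimensions from the table. A minimal sketch of how such output might be parsed and validated — note the allowed category sets below are inferred only from the values that appear in this dump, not from an official codebook:

```python
import json

# Allowed values per dimension, inferred from the coded output shown
# above -- the full codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "outrage",
                "resignation", "fear"},
}

def validate_codes(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        # IDs in this dump are prefixed ytc_ (comments) or ytr_ (replies).
        if not cid or not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {cid!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_UgxV-t8rRQ1HYs1fbOx4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
codes = validate_codes(raw)
print(codes["ytc_UgxV-t8rRQ1HYs1fbOx4AaABAg"]["emotion"])  # approval
```

Rejecting unknown category values outright, rather than passing them through, makes it obvious when the model drifts from the coding instructions.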