Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The algorithm just favors something that can post real fast and lots of "content… (`ytc_UgwT1iuHU…`)
- that's not really a pro-AI argument - it's more a deflective argument, they're d… (`ytc_UgzE9QL7i…`)
- Jimmy, last-mile door-delivery is probably a relatively tiny chunk of the overal… (`ytc_UghlK1xDQ…`)
- People who still think AI will be a 1:1 replacement are just coping. Engineers w… (`rdc_mp39ult`)
- Even if we destroy the robot, the AI will definitely able to transfer itself to … (`ytr_UgwBjQSuc…`)
- AI will be the death of mankind, that's if THE MOST HIGH (GOD) don't come and de… (`ytc_UgzAulshc…`)
- Well the training juniors days were long gone even before agentic ai thanks to p… (`ytc_UgwZN1r_B…`)
- Become better writers. Right now, I'd rather scripts be written by AI to compare… (`ytc_UgzJ1eWRc…`)
Comment
> ChatGPT is apologizing because (a) it's programmed to and (b) it's a polite thing to do that makes people feel better. Moreover, we can interpret its apology as merely an admission of fallibility and error, which is not in any way dishonest. Finally, there need not be any emotion or consciousness behind an apology for it to be honest. You might want to consult with an AI expert on this if you want to understand more. If you can't find one then I (PhD in computer engineering) could make an attempt to stand in for one.

Source: youtube · AI Moral Status · 2024-08-08T14:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzKfivHWSk0Dwdd_1d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5oVq_dnYTOV5GvwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_I89CCuX4lbHiYwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy1FZyaz01FfMHXs0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxfMrz-hb4gOV-2xKd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzGp1kL-p4RD6ag_VB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugznxqv7m5QAnbnbj2Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwB7BdoPtrsLRjoXJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwwBc0zawLvxRx60Tp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwrQIzG6DaFPe13nzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
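A raw batch in this shape can be checked before its codings are trusted. Below is a minimal validation sketch, assuming the four dimension names shown in the Coding Result table and the value sets observed in this one sample; the actual codebook almost certainly allows additional values, so `ALLOWED` here is illustrative, not authoritative.

```python
import json

# Allowed values per dimension, inferred only from the sample response above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "unclear"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every record's dimensions.

    Raises ValueError on a missing or out-of-vocabulary value, so bad
    batches fail loudly instead of being silently stored.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records
```

Failing fast at parse time keeps the per-comment lookup above reliable: every stored record is guaranteed to carry a known value for each dimension.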