Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "@neonhalos No I was using the word correctly. The post said that Charlie doesn’t…" (`ytr_UgwpubaJZ…`)
- "There was this Swedish skit on TV a few years ago with "Barack Obama". I knew it…" (`ytc_UgxOhVg_B…`)
- "i just want my own HK-47 droid. also A.I. is a very broad statement because ther…" (`ytc_UgiNsyuU6…`)
- "There is literally no scenario where ai works to humanity's benefit. Just like …" (`rdc_m79wdn9`)
- "Is it easier to fix society so that no one needs AI as a support mechanism, or t…" (`rdc_n7tvnxz`)
- "I think the outcome that nobody really wants to talk about is that with no meani…" (`ytc_UgzXaq5QT…`)
- "So we not going to talk about how that robot had a counter for every punch throw…" (`ytc_UgwRxH26m…`)
- "ChatGPT actually does understand morals and ethics, or at least was coded that w…" (`ytr_UgzxGV09V…`)
Comment

> Fun fact: my chatGPT voice sounds exactly like Hannah Fry. So anytime I "talk" to her, I can't help but feel like it's Hannah answering me. Naturally, I'm always polite to Hannah Fry GPT

youtube · AI Moral Status · 2026-03-10T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
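The four coded dimensions appear to come from closed label sets. A minimal validation sketch for one coding record, assuming label sets inferred only from the values visible in this dump (not from a published codebook):

```python
# Hypothetical label sets, inferred from the values seen on this page.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} label: {value!r}")
    return problems

# The coding shown in the table above passes.
coding = {"responsibility": "none", "reasoning": "unclear",
          "policy": "none", "emotion": "approval"}
print(validate_coding(coding))  # []
```

A check like this catches the common failure mode of LLM coders drifting off-schema (inventing labels or dropping a dimension) before the result is stored.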
Raw LLM Response
[
{"id":"ytc_UgxHxdIx1hGdm8vXPAh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgynrkaI9IhizR_7UCN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz496MHO-v-8bMDHFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7FSLH08PSXwZ2_594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNNAddUv_-2w6D2xZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzxdvUUKLPJr7tp3VV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7mBJc_g0STyUV4EJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx8-tRYJKM4DNtqAAF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFMz8MgXrVwOD0Ajd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzTB9IStAJBGWy-i4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
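The raw model output is a JSON array of coding records, one per comment. Indexing it by `id` is enough to support the look-up-by-comment-ID view on this page; a sketch using two of the records above (field names taken from the dump itself):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgxHxdIx1hGdm8vXPAh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgynrkaI9IhizR_7UCN4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Index by comment ID for O(1) look-up.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgxHxdIx1hGdm8vXPAh4AaABAg"]
print(rec["emotion"])  # fear
```

Because the model returns a bare JSON array, a single `json.loads` is sufficient; if the model ever wraps the array in prose or a code fence, the parse step would need to strip that first.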