Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "10:54 -- that is my EXACT issue right now. I've built a full-stack AI applicatio…" (ytc_Ugx4g59fA…)
- "id be more worried if i were middle or upper class, they are far more worth repl…" (ytc_Ugyjwe56Z…)
- "I need to say that, the "studio ghibli" art everyone is doing isn't even studio …" (ytc_Ugw1oXngb…)
- "This is one of the best podcasts I have seen in a while. Thank you Karen for sha…" (ytc_Ugw-QaVWx…)
- "These idiots... poor writing is what is killing the industry and they think AI c…" (ytc_UgxE9vUc4…)
- "Finally after 3 hours and 20 minutes Eliezer gets a few minutes to lay out why A…" (ytr_UgyzGOpwv…)
- "It makes me think of the film Universal Soldier with Jean-Claude Van Damme and Michael…" (ytc_UgzD8pqKV…)
- "AI doesn't buy houses or eats hamburgers. Looks like humans will be in a world o…" (ytc_Ugwk32dr6…)
Comment
Anytime you ask ChatGPT why it gave an answer, the real answer is always:
"I was just saying the kind of thing some people say, and I figured you'd buy it."
The aim is plausibly humanistic, pseudo-credible answers stated with confidence, but with no relation to factuality.
youtube · AI Moral Status · 2025-01-21T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugx6naIIh3LytJdZsGF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxeNfaYnusSzIrgm954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz05_Jag7nX8Wr08aB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtywhUwZDQa9l00ft4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzsY5U_k2n2ukUMSTF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz6CEPSNpcvnO08Yxx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw4GjxGlPEXGZjVgOF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyA2_AiocMk7RT_CjZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy5TglCLG4U3xG9iv54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQjEo6E3vtEm3BtGJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
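The raw LLM response above is a JSON array with one record per comment, each keyed by `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming only that shape (the parsing function name is illustrative, not from the tool itself):

```python
import json

# A shortened raw LLM response with the same record shape as shown above.
raw_response = '''[
  {"id": "ytc_Ugx6naIIh3LytJdZsGF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxeNfaYnusSzIrgm954AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgxeNfaYnusSzIrgm954AaABAg"]["emotion"])  # → outrage
```

Indexing by `id` is what lets a single coded comment (like the one shown in the Coding Result table) be retrieved from a batch response in constant time.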