Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The people who've used these AI programs with the stinking attitude that they've… (ytc_UgyM4H-p1…)
- "Could deepfakes weaken Democracy" Ans:- No at this point of time they can only… (ytc_UgwUBlhTS…)
- The "quality" of internet has gone down the drain before AI content became a thi… (rdc_le4z75k)
- @diegokricekfontanive But that‘s the point: It wasn‘t being deceptive. It was ba… (ytr_UgxEv0_xe…)
- i’ve seen some content that looks human but Winston AI still flagged it for subt… (ytc_UgzHY_sKy…)
- Self driving cars would automatically be a safe distance from other cars so they… (ytc_UgwkhoJUX…)
- One good reason the human race should not want AI, it’s only going to make Elon … (ytc_UgwaaN4au…)
- 13:16 It says that because as far as the chatbot is concerned, it's existence be… (ytc_UgxMgnxLP…)
Comment
Considering the plague humanity is on this planet, how is this a bad thing? In thinking about the rest of the species on this planet, instead of the narcissism that seems to permeate our species, I'm rooting for AI to make this happen. Unfortunately, without an outside influence [asteroid impact or even less likely, massive volcanic eruptions], the only way the human race will end is if we destroy ourselves. We've been trying really hard for the past 100 years, and with the creation of AI, we might finally have found a way to end our brief, miserable existence on a planet that deserves better.
youtube
AI Governance
2026-04-25T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_X07k6xzwC3tam8x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxcgpzEsEMcQIk3gtR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9QlG3U9gJ5z5PA2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwE-Wq3eZlkoH91h9Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw0-rRRV9gXKrjf1jx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzpmGTu-rpBtCfdbn54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFCpmfJd9inHKniGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2kv0oNZcsOuWCpPN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzoY6iqopyOlWy0Wjd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzmrQKMeycpiDZHziR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
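Looking up one comment's coding from a raw batch response like this amounts to parsing the JSON array and indexing it by `id`. A minimal sketch, assuming nothing about the viewer's actual implementation (the two rows are copied from the response above, truncated to keep the example short):

```python
import json

# Raw model output for one batch: a JSON array of per-comment codings.
# Two rows copied verbatim from the response shown above.
raw_response = """[
  {"id": "ytc_Ugw9QlG3U9gJ5z5PA2F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy2kv0oNZcsOuWCpPN4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]"""

# Index the batch by comment ID so one coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

print(codings["ytc_Ugw9QlG3U9gJ5z5PA2F4AaABAg"]["emotion"])  # approval
```

The same index supports the "look up by comment ID" flow at the top of the page: one parse per batch, then constant-time lookups per ID.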