Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
That's very interesting. You seem hurt. In this content, you repeatedly reference Wikipedia as a source. It's a known fact that Wikipedia isn't neutral, especially on political issues. Using Wikipedia content as a source—containing sources whose expertise is questionable, whose sources exist but whose accuracy is uncertain—doesn't make you credible either. It's also a known fact that you yourself aren't objective, especially on political issues, and that you have many erroneous pieces of content. Yet today you're complaining about AI distributing inaccurate content. Your aim isn't to point out an error, but rather to highlight the fact that AI is shaking your position. Ultimately, you're just as likely to disseminate erroneous or manipulative content as AI.
This is just a question: what if a YouTube channel funded by Soros, Gates, and Funk, whose main aim is manipulation, is falsified by AI, and that channel's entire purpose is threatened by AI? Does that scenario sound familiar, you know, Kurzgesagt?
youtube
2026-04-05T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyUD6AqIAB5thrADjJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxt_YdR0-xIs23VkfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxHeUBwHFRSiVGusdV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyVaG3PiTCLaTmJglB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPo3qnQWLCV6RMeLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugz50ZQ4VAj-LFp4b5x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz7E7QLD02uMPnbB6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyg8vaCAOeVWC9dzMl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzxRPaJvjXQ0Ts6mHp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEoTJTIH7pa0sQS5l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
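A record like the one above can be pulled out of the raw batch response programmatically. The sketch below is a minimal parser, not the tool's actual code: it indexes codes by comment ID and checks each dimension against the value vocabulary observed in this sample batch (the `ALLOWED` sets are inferred from the JSON above, not an authoritative code book).

```python
import json

# Dimension vocabularies observed in this batch (an assumption inferred
# from the sample output, not the tool's full code book).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "industry_self", "none"},
    "emotion": {"approval", "indifference", "fear", "outrage", "mixed"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw batch coding response and index records by comment ID.

    Raises ValueError on any record whose dimension value falls outside
    the expected vocabulary, so malformed LLM output fails loudly.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With a parser like this, looking up the coding for one comment is a plain dict access, e.g. `parse_batch(raw)["ytc_Ugz7E7QLD02uMPnbB6Z4AaABAg"]["emotion"]` would return `"mixed"` for the batch shown above.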