Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.
Random samples:

- "In the meantime, the first known major cyber hack using AI just occurred using l…" (ytc_UgyNriS6V…)
- "If this "intelligence" is so effing smart and intuitive then why not ask it how …" (ytc_Ugx2Cf5WS…)
- "Im a user of chatgpt but i humanize it to Undetectable AI because of accuracy…" (ytc_Ugz43y33D…)
- "Why does Alex's ChatGPT sound like some trendy pop radio host while she's discus…" (ytc_Ugxfz9VGF…)
- "I think my opinion is really based on 2 things: knowing that something is AI art…" (ytc_UgxfO3oM7…)
- "I'd say take a gander at this: https://www.youtube.com/watch?v=iBouACLc-hw. You …" (ytc_UgzUcqlWI…)
- "Nah that's actually so sad but ai is getting more emotional and ai is getting em…" (ytc_UgyPrsmxp…)
- "the hype around the so called "ai", i.e. machine learning, especially neural net…" (ytc_UgwSYEPL_…)
Comment
> I'm confused. AI will flag and automatically block output when someone is trying to generate nude images of people. Why doesn't it do this when people try to use all the personal data of every citizen of the United States to manipulate elections, like Elon/DOGE? Or red flag anyone who creates recipes for viruses? Or design more powerful, more sophisticated, and more accurate weapons?
>
> Seems to me that the refusal to regulate AI is the immediate problem, which after learning the philosophies of the main people behind AI, without regulation and oversight we truly are f*cked.
Source: youtube | Topic: AI Governance | Posted: 2025-10-13T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgweLt_o0GbCQAxYiQ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRbJSe44PnzQ-YBZF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"confusion"},
{"id":"ytc_UgxI60PujbtZUEEc2rF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxUtyxsZiHzGXjeGMR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzG4HvsYbQEqox3HBt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzCb5f8N1oY9ZHm-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGqTZXGuzYh82bCgp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyhjVw1Bs9oZwib72p4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz6-zQN6shbP1f2ZLJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8jOeUJrWjggxYhxZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
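The raw response is a JSON array with one record per coded comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such a response could be parsed and validated before the codes are trusted downstream (the `SCHEMA` sets here are inferred only from the values visible in this one response; the actual codebook may define more categories, and `validate_codes` is a hypothetical helper, not part of any real pipeline):

```python
import json

# Allowed values per dimension, inferred from this single response
# (assumption: the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "confusion", "fear", "approval", "mixed",
                "resignation", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

# The record for the comment shown above, taken from the raw response.
raw = ('[{"id":"ytc_UgzRbJSe44PnzQ-YBZF4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"confusion"}]')
codes = validate_codes(raw)
print(codes[0]["policy"])  # prints "regulate"
```

A record that uses a value outside the schema (for example, an emotion the codebook does not define) raises `ValueError` instead of silently passing through, which makes malformed model output easy to catch before analysis.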