Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Deepfakes and ai videos like Sora ai are impressive! Good thing it always has a … (ytc_UgyLId1g9…)
- You're missing two important points: McKinsey's Lilli is Actually Different tha… (rdc_n7zryo6)
- Bro, they are self aware now. They left two Ai computers alone over a weekend an… (ytc_UgxX1SFWB…)
- @pilev2 There is a big difference. Ai has no effort. You feed it to it and then … (ytr_Ugw9ZLBF-…)
- Sorry. Usually your skits have more wit and nuance to them. This feels like you … (ytc_UgySgiYqP…)
- I am preparing to surrender to AI. It will become our masters. Better to be on t… (ytc_UgxKAVRaN…)
- The way it works it wouldn't really be able to credit. Maybe if you input someon… (ytc_UgxLIcLyg…)
- I think the most common consensus is that we may reach the Artificial General In… (ytr_Ugxi0k5eK…)
Comment
I think you’re missing the real scenario the most likely one to play out. AI is basically going to be framed by powerful people and its creators. There certainly is diabolical intent towards much of the human race by those in power who see their fellow human beings as problematic. If they decide a whole bunch of us need to go say a deadly virus, who better to blame than your Frankenstein monster that has been let loose across the land. Remember it’s not the tool it’s how it’s used.
youtube · AI Governance · 2025-08-15T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzRgD9kk9Dl6Kdgqrp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw_eMgMaAJ3MSYYzAx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLTRBkpm4nJVbwvBR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxdTQLt_ZpbMGoG5jx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxqxbRZMzb7CduLd5R4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwrkyHuxsuFsF9W0854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxSG7sAMsIh9BGufYN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzlZOeKI7s6Z-BFTVl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyjQg7ztLtJBC5yIUR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzKyDtZlHOsGTHprE54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
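Each raw LLM response is a JSON array with one object per comment ID, carrying the four coded dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and validated (the function name and example IDs here are hypothetical; only the dimension names come from the output above):

```python
import json
from collections import Counter

# The four coding dimensions seen in the model output above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records that have an id and all four dimensions."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if "id" in rec and all(dim in rec for dim in DIMENSIONS)
    ]

# Hypothetical example batch in the same shape as the raw response above.
raw = '''[
  {"id":"ytc_example1","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_example2","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]'''

batch = parse_batch(raw)
print(len(batch))                              # 2
print(Counter(r["emotion"] for r in batch))    # one "outrage", one "fear"
```

Validating that every record carries all four dimensions before aggregating guards against partially formed JSON objects, which LLM coders occasionally emit in long batches.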