Raw LLM Responses
Inspect the exact model output for any coded comment; look up by comment ID.
Comment
Be aware that FUD is a big part of the marketing hype behind AI. AI is a method of controlling information never seen before. It is fundamentally flawed. Adopting it will mean loss of freedom, utterly. It will precipitate the wholesale extermination of populations, as we are seeing in Gaza now. Imagine that in Compton or East LA, precipitated by a manufactured crisis. "Domestic terrorism" they will say. The security risks are enormous. Think long term. What happens 5 years from now if AI is widely adopted. This investment bubble, when it pops, will crash the US economy for a decade.
youtube · AI Governance · 2025-09-07T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzLrnmU3GKC531lrvl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgPdDnebTUVB1yT6Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy3IOs2pRn1xc5MLZJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqqYQHTE3AOjZ_6o94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJkCxuZlme2IwyvLF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTg_7U4QXJe0buQcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweBQPoYsHaC06Tohh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwC3un5LyiYfEx5ihB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwA3_2mHvKo1-8IGJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwA_86SFengDVhq5414AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
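The raw response is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and indexed for lookup by comment ID (the two entries are copied from the array above; the variable names are illustrative, not part of the tool):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = """[
{"id":"ytc_UgwC3un5LyiYfEx5ihB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgweBQPoYsHaC06Tohh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Index the coded comments by their comment ID so a single
# comment's codes can be retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

result = codes_by_id["ytc_UgwC3un5LyiYfEx5ihB4AaABAg"]
print(result["policy"], result["emotion"])  # ban fear
```

This matches the displayed Coding Result: the comment coded `distributed` / `consequentialist` / `ban` / `fear` is the one rendered in the card above.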