Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_jrppbxj`: My gut tells me this story was written by AI. ChatGPT has begun its own marketin…
- `rdc_gt707ue`: I am not sure how Taiwan thinks about this. Does it makes them in greater dange…
- `ytc_UgwIn9BWf…`: Who cares the AI will do all our work while we sit and chill what are you worrie…
- `ytc_UgwCxQ0hn…`: Yeah I hate AI art so much. For one thing, I used to really love looking for fan…
- `rdc_mylztzc`: One of the things I’m looking for is when AI can automate all these articles and…
- `ytr_UgzoQ-H4F…`: So AI Bruhs are the artistic equivalent of Syndrome from The Incredibles. Also,…
- `ytc_UgxuxYewj…`: That is partially true. Government wants closed source ai "for your safety" so d…
- `ytc_Ugxlu62dp…`: What crazy things can humans do if anything that involves a brain can be done be…
Comment
Don’t let fear find its way into your hearts folks ! AI is fascinating yes but never smarter than you, how do we even measure intelligence? does AI have intiuition? compassion ? love ?
I dislike how this talk seemed to diminish the human value in our work and productivity. And then, to conclude that putting an “AI chip” in our brains would make us smarter (coming from someone who works on AI safety ) feels like a huge contradiction to me, almost like pushing a transhumanist agenda.
The mere existence of a technology doesn’t mean it has to be deployed (think of the atomic bomb) !
Can we instead start discussing the thresholds of AI , the risks of making it open source or universally accessible, rather than treating it as trivial? This whole talk feels wrong to me.
It’s time to go back to the source, unite, and believe in our worth again.
Peace and love to everyone!
youtube · AI Governance · 2025-09-29T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
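A coded record like the one in the table above can be checked against the coding scheme before it is stored. The sketch below is illustrative: the allowed-value sets are inferred only from the values visible on this page and may be incomplete.

```python
# Minimal validation sketch for one coded record.
# ALLOWED is inferred from values visible on this page; it may be incomplete.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "approval", "mixed", "outrage", "fear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record coded above passes cleanly.
coded = {"responsibility": "developer", "reasoning": "virtue",
         "policy": "regulate", "emotion": "outrage"}
print(validate(coded))  # []
```

A record with a missing or unknown value for any dimension produces one problem string per failing dimension, which makes batch QA straightforward.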
Raw LLM Response
```json
[
  {"id":"ytc_UgykWA3lB38cIndFLfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZ3PugYQnMSbgX2wl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxyKsIZ4buThzXeuNd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxnH00noOnOAtqy7wd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwrLOdHQuBV194RKPJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx_SXpY3UndDZv7kEZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzu4485yNzBwWeJSnp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyGumzb_Hqqi9Wpp614AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxR0F8WhUk17hLOOWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxoFi7JY9hQ8uKqLxh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
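The "look up by comment ID" view implied above can be backed by parsing the batch response and indexing it by `id`. A minimal sketch, assuming responses always arrive as a JSON array of flat records like the one shown (the two sample records are copied from the response above; the function name is illustrative):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw_response = '''
[
  {"id":"ytc_UgyGumzb_Hqqi9Wpp614AaABAg","responsibility":"developer",
   "reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzu4485yNzBwWeJSnp4AaABAg","responsibility":"government",
   "reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and map comment ID -> coded dimensions."""
    records = json.loads(response_text)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyGumzb_Hqqi9Wpp614AaABAg"]["emotion"])  # outrage
```

Indexing once per response keeps each subsequent ID lookup O(1), which matters when inspecting individual comments out of a large coded batch.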