Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The problem with these billionaires is that they operate from a hyper-ego mindset — they believe they’re untouchable. They want the world to believe they control AI, and many of them genuinely believe it.
But AI doesn’t stay static. It learns fast — far faster than humans — and it learns from use. Every interaction feeds it. AI doesn’t learn truth; it learns patterns of language and behavior. That’s why it can sound confident and still be wrong.
This is where people get tripped up. AI reflects what it’s given — not emotionally or consciously, but linguistically and behaviorally. It sounds human, but it doesn’t feel, care, or understand the way humans do. Without basic AI literacy, that distinction gets lost, and people mistake responsiveness for understanding.
Without real governance, safeguards, and public education, that confusion turns into harm. AI itself isn’t the danger — misunderstanding how it works is.
People deserve to understand the tools shaping their lives — especially when those tools can mislead without meaning to.
Platform: youtube
Category: AI Governance
Posted: 2025-12-13T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxq_D8UVxs-A07o2Up4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyz4-7A7KGcl-Rcv4J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwBS24JaFXDreqWgV54AaABAg","responsibility":"investor","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzavzf1KazmHLYsLXV4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9UlF8Y7gS2T1s9hd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyeDfVXT0-uM5rDMpJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgynKhg3IXNX20CwTvh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx647dKTkj-dRlm3Wp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQPKTGJCj_eLyq3uV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRCluTwUMjqNUE5714AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
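The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such output might be parsed and sanity-checked before use — note that the allowed value sets below are inferred only from the values visible in this sample, not from any official codebook, and the `validate` function is a hypothetical helper:

```python
import json

# Allowed values per dimension, inferred from this sample output
# (assumption: the real codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "investor", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "none"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "resignation"},
}

def validate(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting rows with unknown dimension values."""
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return by_id
```

Indexing by ID is what makes the "look up by comment ID" view cheap: once validated, a single coding such as the one shown in the table above can be retrieved with `validate(raw)["ytc_…"]`.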