Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Of course it’s not surprising. Think about who built AI. People who built thei…
ytc_UgwWoMquI…
AI does not stop you from creating art. AI limits the money you make with it. Ma…
ytr_UgwQc71h2…
I have only used AI, more specifically ChatGPT, to just play stupid chess games.…
ytc_Ugwimoiac…
No literally because if you search anything on any search engine, the dumbass ex…
ytc_UgxBkBeyZ…
How many people in the discussion have bought an art object from an artist in th…
ytc_UgxDG5kBU…
This is why I can't stand anti AI people who are like "oh its just autocomplete"…
ytc_UgwPGLa2x…
If I'm reading this correctly...do they generally believe that people choose to …
rdc_cdlyrf4
China has used AI to plan it's invasion of Taiwan in the late winter. AI has sai…
ytc_UgzliEQHN…
Comment
When algorithms are built to capture our attention by reaching deep into our most instinctive, unconscious drives, they stop being neutral tools. They begin to manipulate, not serve. And when their sole purpose is to keep us engaged—feeding on emotion to fuel endless growth—they risk hollowing out what makes us human.
If we let this continue unchecked, it won’t just damage people—it could unravel the very system it was meant to benefit.
This is where regulation should step in. Not just to measure efficiency, but to ask what these systems are doing to us. And maybe the ones setting those boundaries shouldn’t be politicians or those with something to gain—but people who truly understand the technology, and who still hold a sense of responsibility for protecting what matters most: our minds, our communities, and the world we live in.
youtube
AI Governance
2025-06-16T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzDiZU493yEun7ATSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzIAAizQJRZBPaJIth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzekWLeHeiRbQwqyTN4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzjCy_k7-vjywMvCp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxatEKB_4tImoezFsp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyowGAVf4v7z_9d6cV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgymaSC8979G1MjsnGB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyIvnGaE9CrNLLshl94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwFQh-P2b2k3VIHIJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyNCgv5_tk1CMZER8R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
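A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the four-dimension schema shown in the response; the enumerated allowed values are inferred from the codes visible here and may not match the full codebook.

```python
import json

# Allowed values per coding dimension, inferred from the responses above.
# This enumeration is an assumption; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed",
                       "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Skip records whose codes fall outside the allowed vocabulary.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(parse_coding_response(raw))
```

Validating against a closed vocabulary catches the most common failure mode of LLM coders: a plausible-looking label that was never in the codebook.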