Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (previews and IDs truncated):
- `ytc_Ugyxmvlm3…` — "AI is a joke, you're going to be sorry for trying to change the world. No one ha…"
- `ytr_UgyDJT-Mk…` — "No it doesn't. You're just hating for the sake of it. If so, why even bother wit…"
- `ytc_UgyztlStJ…` — "I remember years ago, when I talked about "intelligent robots doing our jobs" wi…"
- `ytc_UgyX-i_CH…` — "In an AI controlled world, maybe the AI would stop one person from buying a 500 …"
- `ytc_UgzXiT_wW…` — ""There are wise and foolish ways of dealing with the threats to our existence," …"
- `ytc_UgzE0I80V…` — "If a geriatric patient can control nuclear warheads and authorize the release of…"
- `ytr_UgwUALI19…` — "@sunburntsatan6475so what you are saying is that if somone uses good adjectives …"
- `ytr_UgwBpKTMk…` — "You literally can't on most commercial AI art sites. It says the prompt is inapp…"
Comment
Regulations sound good in theory, but the authors of this video overlook one crucial point: a lot of today’s AI regulation is written by people who don’t actually understand the technology, and that kind of intervention tends to stall growth rather than guide it. In areas like AI-driven hiring, the market will inevitably self-correct anyway. Employers will eventually realise that AI-filtered candidates are often just people who are good at gaming AI systems, not necessarily people who are good at the job. If we don’t fully understand how AI works, how can we design regulations that genuinely solve the issues? My concern is that places like California may end up regulating so aggressively that the development simply moves elsewhere — not necessarily to China, just to any region that chooses a lighter approach. I’m not saying AI should be left completely unregulated, but assuming that regulation automatically fixes problems is naïve.
Source: youtube · Posted: 2025-12-04T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyQEeKcGWLFTvEZWXB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzfWJUolcpM82SA5QB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwkqB3QWHHqBE8diHF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugyvby1s8IZT3Q1L_Gh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzoDg-VihVSAoPl7814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyCNL4V4kZLTyfADsN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwCEbQdNoJJIGulU0R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugyt5S_WePTUAbbuNO14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVrSvXpH69ofDySfV4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwQVlI_JDbDQZlNpK94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
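A raw response like the one above has to be parsed and checked before its rows can populate the coding table. The sketch below is a minimal, hypothetical illustration of that step, not the tool's actual code: it parses the JSON array into a lookup keyed by comment ID and drops rows whose values fall outside the dimension vocabularies. The allowed value sets here are inferred only from the values that appear on this page, not from a published codebook.

```python
import json

# Allowed values per coding dimension, inferred from the values visible on
# this page (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "industry_self", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, skipping invalid rows."""
    codings = {}
    for row in json.loads(raw):
        # Drop any row with a missing or out-of-vocabulary value.
        if not all(row.get(dim) in values for dim, values in ALLOWED.items()):
            continue
        codings[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return codings

# Example: the entry for the comment inspected above.
raw = ('[{"id":"ytc_UgwkqB3QWHHqBE8diHF4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwkqB3QWHHqBE8diHF4AaABAg"]["policy"])  # industry_self
```

Validating against an explicit vocabulary is what lets the "Coding Result" table render safely even when the model occasionally emits a malformed or off-schema row.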