Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I understand the concern you’re raising, and I don’t think it’s coming from a bad place. But even when it’s directed at all AI companies, focusing on attacking them as a whole probably won’t lead to real change—especially when the entire space is driven by competition, growth, and profit.
AI isn’t controlled by one group. It’s being developed across multiple companies, countries, and systems. So even if one slows down or changes direction, others will continue. That’s just the reality of how this is evolving.
If the concern is about where this could lead—like the fear of it going too far or becoming something we can’t control—then the conversation might be more effective if it shifts toward guidance. What values should shape it? What boundaries should exist? How do we keep it aligned with people, instead of against them?
Because in the end, it’s not just about who’s building it—it’s about how it’s guided. That’s where the real influence is.
youtube
2026-04-13T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyTp0lYd0Y2tc7Q83B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyt5kIMDba5McvdU8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9chuAeg2gmcHVoQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyJbRT05x-TgtdKmC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVU9HcLEaTGRN9wJp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy4NuhHcdMHS8tszP54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
  {"id":"ytc_UgwEMToE16KHCzLVMml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBHCrG4_J-b1fqLxx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxRZls--KPmTuYaRg94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwpnbiMiSp0BiKV1Nx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
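As a minimal sketch, a raw response like the one above can be parsed and checked against the coding dimensions before being stored. The allowed value sets below are inferred from the visible output only; they are assumptions, not the project's actual codebook.

```python
import json

# Allowed values per dimension, inferred from the sample output shown above.
# (Hypothetical: the real codebook may define additional categories.)
SCHEMA = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"approval", "disapproval", "outrage", "fear",
                "indifference", "mixed"},
}

# A one-row stand-in for the raw LLM response (same shape as above).
raw = '''[
  {"id": "ytc_example1", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]'''

def validate(rows):
    """Keep rows whose dimension values are all in SCHEMA; report the rest."""
    valid = []
    for row in rows:
        bad = [dim for dim, allowed in SCHEMA.items()
               if row.get(dim) not in allowed]
        if bad:
            print(f"{row.get('id')}: invalid dimensions {bad}")
        else:
            valid.append(row)
    return valid

codes = validate(json.loads(raw))
```

Running this against the full response would flag any row where the model drifted outside the expected label set, which is the usual failure mode when coding at scale.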