Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Honestly, when I first saw the "video" they seemed kinda weird to me, especially…" (ytc_UgzMbs8xS…)
- "im mainly on the side of "it looks like shit", i kid you not i have such a repul…" (ytc_UgzA67uHo…)
- "The prime minister of italy had a deepfake made of her and she took the whole th…" (rdc_kwbsho0)
- "(Me hating AI images) Meh he he he he he he!!! (when I hear about this)…" (ytc_UgyBzWk_Q…)
- "People really need to stop fighting the evolution on ai since its not going anyw…" (ytc_UgyMz96uE…)
- "Imagine if not long after the Manhattan project began, Westinghouse purchased th…" (ytc_UgwMZ7UKW…)
- "Michael Rood's "Shabbat Night Live" show discusses AI. I sent them this message …" (ytc_UgzRsLHEc…)
- "Flashbacks to me and a classmate bonding over posting art on Pinterest, and then…" (ytc_Ugx1ERnRS…)
Comment
We keep having people warn about an AI apocalypse, and honestly I'm just so tired of it. LLMs are hurting people right now. They have drastically increased the amount of misinformation. They're terrible for the environment. And we're hearing more and more cases of people going down rabbit holes, and being seriously disturbed, even driven to suicide, by their LLMs. I think a future AI apocalypse is something to be genuinely worried about, though by no means a certainty. And you could address it in the same legislation you use for everything else. But the people most concerned about it almost never bring up, let alone try to deal with, the harms LLMs are doing right now. Many of them, like Sam Altman, just use it as a way to say they need to develop AI superintelligence first. It's just a marketing strategy, and a way to avoid talking about the people being hurt by AI right f*cking now.
Source: youtube · Topic: AI Governance · Posted: 2025-10-15T14:3… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw020LS5heBPqkmljh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzGk_HeUExutKl7cH14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz_cBrS56ehAj5JJWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzYHvjd6N-ZMYg2Aw54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJTjwXSKOp62hMybJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyQKbLJu4dbiNsUeeR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw20JWf1bwQ6F0L5Q54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwe1saDyf4vOv1A35Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx3s1S-MN4X0swLOkt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
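The raw response above is a JSON array with one row per coded comment and the same four dimensions shown in the Coding Result table. The sketch below shows one way such a batch could be parsed and validated before the values are stored. The allowed value sets are inferred from the ten rows shown here; the actual codebook may define additional categories, so treat `ALLOWED` as an assumption.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may permit more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        # Every row must carry an id plus all four coding dimensions.
        missing = {"id", *ALLOWED} - row.keys()
        if missing:
            raise ValueError(f"row missing fields: {missing}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row[dim]!r}")
    return rows
```

Validating before storage means a hallucinated label (e.g. a `policy` value outside the codebook) fails loudly at ingest time instead of silently polluting the coded dataset.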