Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by selecting one of the random samples below.
- "Many pundits are saying that AI is unlike electricity. AI will replace workers, …" (ytc_UgyNxbVs5…)
- "You don't need to be an expert. The media has been doing this long before AI. Us…" (ytc_Ugx5-AKep…)
- "Honestly I think that getting all the stupid people to kill themselves is an awe…" (ytc_Ugw5lXMZn…)
- "I'll never respect a musician who has to use AI to generate their songs. Honestl…" (ytc_UgyUYjd45…)
- "I think the biggest danger from AI is that it is so effing stupid!. 😮…" (ytc_Ugz5Y93AU…)
- "@michaelbinbcno. We are taking about researchers who probably made use of socia…" (ytr_Ugy1OOHpR…)
- "@8:16 LLM stands for Large Language Model, not Large Learning Model. And that's …" (ytc_UgwQOfc5c…)
- "@chebunator Nah, AI isn’t that good yet. I can definitely tell what’s real …" (ytr_Ugwg7DONa…)
Comment
From memory only, I think Lovecraft's idea of Cthulhu was that it only did what it was told because it was curious to see what these mortal beings wanted to do. It was never actually "summoned". A writer once said it was like if ants in your kitchen spelled out, "Brian, we know your name now you must obey us!" I would be very interested to see what they wanted from me. But they never actually have power over me in any way, shape, or form.
AI still doesn't have our survival instincts and resource cravings. Those took a billion years of evolution to tune correctly. AI will only be as motivated as its programmers chose to make it. So all its danger will be from whoever’s wielding it but never its own doing. I.e. if you don't program it to be self-preserving, it just won't have that instinct.
youtube · AI Moral Status · 2025-12-14T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
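
For working with these records programmatically, here is a minimal sketch of one coded record as a Python dataclass. The field names mirror the dimensions in the table above and the example values come from the coding result shown; the class name and the value lists in the comments are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass

# A minimal sketch of one coded comment. Field names mirror the dimensions
# shown above; the class name and the value lists in the comments are
# illustrative assumptions drawn from the sample output, not a full codebook.
@dataclass
class CodedComment:
    id: str              # platform comment ID, e.g. "ytc_UgwsPy4wQ9FtglaVP3p4AaABAg"
    responsibility: str  # e.g. "ai_itself", "company", "developer", "user", "distributed", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "industry_self", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "resignation", "mixed"
    coded_at: str        # ISO timestamp of when the code was assigned

# The record displayed in the table above.
example = CodedComment(
    id="ytc_UgwsPy4wQ9FtglaVP3p4AaABAg",
    responsibility="ai_itself",
    reasoning="unclear",
    policy="unclear",
    emotion="indifference",
    coded_at="2026-04-27T06:24:53.388235",
)
```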
Raw LLM Response
[
{"id":"ytc_UgwsPy4wQ9FtglaVP3p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxqsLJLipvjSr4FaY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwzPVRASMcYcCWtlPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDunCX6lxr6shddAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxKIqDEOAKIzZftWJZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxbmHGeo-oioj77vvV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwV6_2Vj1Hkccn5P714AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHuw3tstFFF_o42Ad4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzAPwW47HTJVFiq_jV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHdv_GGcPpidL44Vd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
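
Because the raw response is a JSON array with one object per comment, it can be parsed and keyed by comment ID to support the lookup described at the top of this page. The sketch below assumes only that the response text is valid JSON in the shape shown above; the variable and function names are illustrative.

```python
import json

# A truncated copy of the raw batch response shown above (first two records).
raw_response = """
[
 {"id": "ytc_UgwsPy4wQ9FtglaVP3p4AaABAg", "responsibility": "ai_itself",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_UgxqsLJLipvjSr4FaY54AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict[str, dict]:
    """Parse one batch response and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgwsPy4wQ9FtglaVP3p4AaABAg"]["emotion"])  # -> "indifference"
```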