Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_Ugzf-Glpm… : Just found your channel, thank you for speaking out about A.I.. I don't know who…
- ytc_UgyHiq5-2… : The thing is, human is flexible, Ai can only answer things that are designed to …
- ytc_UgwLLduHh… : AI, as it is, is bad. It can be useful, but it's not morally right. People shoul…
- ytc_UgyY5nO4S… : I mean yeah, although it does look very similar to a person, it's standing in a …
- rdc_mz03tcc : You know why current models work? Because the corpora they’re trained on quite l…
- ytr_UgxI64CR_… : AI is built on using patterns from hundred of works which is just copy-paste,no…
- ytc_UgzUI7q9o… : What if you create AI art on Midjourney and finalize it by hand? What if you ma…
- ytc_Ugy4Qh1O6… : Wiseman once said, "I am as confident that AI's promise will outweigh the peril …
Comment
Just have AI figure out the energy problem of inference: not good. It sets up an explicit trade off between AI and humanity.
Every request for some kind of “improvement” to the process of living and survival MUST be contingent on the survival of humanity. It’s not good enough to imbue AI with an appreciation or dependence - artificial or otherwise - on the systems we use for humanity’s survival. Doing so pits AI against humanity at the outset.
AI must value the survival of humanity as the single most important base condition for all its thought and action.
And even that will be uncomfortable, when AI encounters a task that features a conundrum of the flavor, “The lifeboat has enough supplies for X people; so reduce the people in the life boat to number X.”
Source: youtube · AI Moral Status · 2026-04-01T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzTLP9q-IHJZezO_Nl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugywys4aOk6SnLxTEeZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxcer26qmNl_uRx1YV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbxFSgu_dg6TExnEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyxWUqqAcsycK8ZNiF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyItmN4Kcv6TKb6HpN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxuVayeXCtXCrSHsqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLS1peWZDlcJrpSCx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUDFdNvMvDrB7bQEF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwtmONHkgzJYceawWV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
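A response like the one above is only usable if every record carries all four coding dimensions with a value from the codebook. The sketch below is a minimal parser/validator for this batch format; the allowed values are inferred purely from the codes visible in this sample (the project's actual codebook may define more), and the function name is illustrative, not part of any real pipeline.

```python
import json

# Allowed codes per dimension, inferred from this sample only;
# the real codebook may include additional values (assumption).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and validate each coded record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = {"id", *CODEBOOK} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields: {sorted(missing)}")
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']}: {rec[dim]!r} is not a valid {dim} code")
    return records
```

Rejecting off-codebook values at parse time is what makes a dashboard like this trustworthy: a model that invents a new label (say `"anger"` for emotion) fails loudly here instead of silently polluting the coded dataset.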