Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Just have AI figure out the energy problem of inference": not good. It sets up an explicit trade-off between AI and humanity. Every request for some kind of "improvement" to the process of living and survival MUST be contingent on the survival of humanity. It's not good enough to imbue AI with an appreciation of or dependence - artificial or otherwise - on the systems we use for humanity's survival. Doing so pits AI against humanity at the outset. AI must value the survival of humanity as the single most important base condition for all its thought and action. And even that will be uncomfortable when AI encounters a task that features a conundrum of the flavor, "The lifeboat has enough supplies for X people; so reduce the people in the lifeboat to number X."
youtube AI Moral Status 2026-04-01T00:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzTLP9q-IHJZezO_Nl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugywys4aOk6SnLxTEeZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxcer26qmNl_uRx1YV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbxFSgu_dg6TExnEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyxWUqqAcsycK8ZNiF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyItmN4Kcv6TKb6HpN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxuVayeXCtXCrSHsqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLS1peWZDlcJrpSCx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUDFdNvMvDrB7bQEF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwtmONHkgzJYceawWV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
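The model returns one batch response covering several comments, and the coding result above corresponds to the single entry whose id matches this comment. A minimal sketch of how such a batch response might be parsed and the matching record extracted (standard-library JSON only; the two-entry sample below is abridged from the full response for illustration):

```python
import json

# Abridged copy of the raw LLM response above (two of the ten entries).
raw = """[
  {"id": "ytc_UgzTLP9q-IHJZezO_Nl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyxWUqqAcsycK8ZNiF4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Parse the batch and index the records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record for the comment shown on this page.
coded = by_id["ytc_UgyxWUqqAcsycK8ZNiF4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer deontological regulate outrage
```

In practice the raw response would come straight from the model output shown above rather than a hand-abridged string, and a real pipeline would also validate that each id in the batch appears exactly once before indexing.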