Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
None of you guys get it. This Chernobyl event for AI already happened, in the very early 1970s. What is an AI but a computer program system that feeds outputs back into its inputs and uses computer calculational power to produce results no human could easily do? That system's technology was formulated by Jay Forrester of MIT in the 1960s and 1970s, with his system dynamics. A world model was constructed using system dynamics and run with computers of the late 1960s, the answers were generated and written into a book, The Limits of Growth, 1972, and the world instantly listened. (This might have been aided by the oil shocks of the early 1970s.) The results of The Limits of Growth were that not only did human population need to have birth and death rates equalized by 1975, but also that capital also needed to be balanced between creation and depreciation. And that limitless resources would not prevent civilizational collapse by 2100, or possibly even 2050. And so humanity gave up the idea that anywhere close to 3.5 billion people could sustainably be at even 1970s First World living standards. History since then has been humanity fighting itself to either get to or remain at First World standards while ejecting as many competitors as possible off of that standard. Few of us can be honest that what we desire most is for human population to be reduced to below 2 billion. And AI already did this to us.
youtube · AI Governance · 2025-10-25T22:1… · 1 like
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzruFdGKogQLIYPCMJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxQ40-uhkdC4XbXIoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzpOqXoFeh7QJgdnON4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwl3SotOIqGBDS4rjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwCsTJyFbQ-c0A3Q014AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyrwOAXyC0YQMHXHDh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwM-5qGwns9wZZah3V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyDE5iB6BuOtLRg2ap4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxclOIgpoUuHPzMei14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxNSXKikXDsV8u3CC54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]