Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As far as I can see there are only three possible stable outcomes. The first is the end of humanity, the second is a global dictatorship with complete control which for instance includes control of hardware manufacturing machinery allowing for the control of development of AI (perhaps the most likely outcome in which we are still around), and the third option which is less likely but probably the most preferable is a global autonomous self correcting system designed to benefit humanity. (Similar to what the Intelligent Internet is designed to be). Such a system would have to collectively be more intelligent than any individual intelligence that may appear here or there and such a system would be the autonomous policing system of the intelligence that emerges.
youtube · AI Governance · 2025-09-07T06:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxvswVxWkAJbEYTkIt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwTR40x1QL81T9hvY54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYAADVgKOekgD3YhB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwFT0pVrxLQ7qu5H2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwTQTR9uD3IgUAlwk14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz4rp2Mrvoz1S85XPF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwghzau0JATrN6FaB54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyEEdnRXq5Fb44jfPV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxLm_5TEdxdbn4tD054AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwa5DjTVRkdtMO9h2d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
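A downstream pipeline would typically parse this raw response and check each row against the codebook before accepting the codings. A minimal sketch, assuming hypothetical category sets inferred only from the values visible in this output (the real codebook may define more categories; `ALLOWED` and `validate_codings` are illustrative names, not part of the tool):

```python
import json

# Hypothetical allowed codes per dimension, inferred from the values seen
# in the raw response above; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and keep
    only rows whose value for every dimension is in the allowed set."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]

# Example: one valid row and one with an out-of-codebook emotion.
raw = (
    '[{"id":"ytc_a","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_b","responsibility":"company","reasoning":"deontological",'
    '"policy":"ban","emotion":"joy"}]'
)
valid = validate_codings(raw)
print(len(valid))          # 1
print(valid[0]["emotion"]) # fear
```

Dropping invalid rows (rather than correcting them) keeps the coded table auditable: every retained row traces directly back to the raw LLM output shown above.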