Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The 99% to 1% analogy doesn’t make sense. If there’s a 1% chance of dying in a car which there basically is and everyone still gets in a car lol. Nothing is risk free. I think the biggest issue here is nobody can quantify what super intelligence even is bc the definition of it is beyond human intelligence. Ergo how is anyone supposed to create a paper defending themselves and or agi against a theoretical outcome that is beyond our comprehension. I get on his terms we’re talking about control of ai and how one would be able to control that but i think what isn’t articulated or understood enough is 1. How this thing works bc even the scientists don’t understand. I don’t think there’s been an invention in history that nobody can explain or understand how it works. That alone should pause these projects dead in their tracks until someone can at least articulate that portion of this which would curb super intelligence in general until there’s a better understanding of large language models and agi agents.
youtube AI Governance 2025-09-06T21:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
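A coding result like the one above can be sanity-checked against the set of allowed values per dimension. The sketch below is a minimal, hypothetical validator; the value sets are only those observed in this output, not a confirmed codebook.

```python
# Hypothetical codebook assembled from the values observed in this output.
CODEBOOK = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def validate(row: dict) -> list:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items() if row.get(dim) not in allowed]

# The coding result shown above for this comment:
coding = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
assert validate(coding) == []          # all four values are in the codebook
```

A row with an out-of-codebook value (or a missing dimension) would come back in the list, which is a quick way to flag malformed LLM output before it enters the dataset.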
Raw LLM Response
[
  {"id": "ytc_UgxIlBvJqBMWtJCXAZx4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxA4vG-8qFHXlE0Kyd4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzoZQXlVp7yvlJM1yt4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugy0C40KlW32km93OD54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy9zz4yehjdNC4sl3l4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugz5SvSiTgZzhzeht0p4AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxHzmczQ8WOZwPBdOp4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgynSzu8WYrwjo8LqiV4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxZvm09EBwjLWK2bwh4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy5GmHB8PDEW-7BRL54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"}
]
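The raw response is a JSON array covering a batch of comments, while the "Coding Result" panel shows the single row matching this comment's id. A minimal sketch of that lookup, assuming the response parses as valid JSON (a shortened one-row payload stands in for the full batch):

```python
import json

# Stand-in for the raw LLM response: one row of the batch shown above.
raw = ('[{"id":"ytc_Ugy0C40KlW32km93OD54AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')

codings = json.loads(raw)                      # list of per-comment dicts
by_id = {row["id"]: row for row in codings}    # index by comment id

# Pull the row for the comment being inspected.
row = by_id["ytc_Ugy0C40KlW32km93OD54AaABAg"]
assert row["emotion"] == "indifference"        # matches the Coding Result panel
```

In practice the parse would sit inside a try/except, since a model can return malformed JSON; rows that fail to parse would be re-queued rather than coded.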