Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m just a private citizen, not a tech insider, but I want to add one thought after listening to this episode: Even if we put the simulation hypothesis to the side, we should still ask: why do we actually need more intelligence than what we’ve already developed up to this point in time? With today’s AI, we already have the ability to model climate systems, optimize energy use, discover new drugs, manage resources, and improve logistics for food, energy sources, and infrastructure. These are the exact areas where humanity is struggling: pollution, food security, health, energy, and livable cities. These issues are common to every country, whether it’s the U.S., China, Japan, South Korea, Taiwan, India, or anywhere else in the world. The real danger isn’t a lack of intelligence to solve these problems. It’s the arms race. If the U.S. thinks China will climb higher, it feels compelled to climb too, and so on. That’s why I think the wiser path is this: pause here, stop, focus, and prove we can work together to apply what we already have. Once we’ve made measurable progress, and when the world is ready to move together, then we can consider advancing as one world toward the higher levels of intelligence. For now, it is like calling for a global pause on further development: a chance to clean up our planet and give humanity the breathing room to adapt, while also realizing the benefits of advanced AI and all that it might offer to us.
Source: YouTube · AI Governance · 2025-09-04T20:5… · ♥ 15
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxcu6FP6Kx-7c-SlXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2U1NFze0OLvEnjMl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxq46NjqGOFizrxCUZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyc1TRrXsKomgpCzsd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwA6_vx1AmU23MfIiR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwKr0KlEJ1DHqnK-Pt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyaNSX8RobzGM2TmH14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZijLszbN3bSoiH0t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwTI_xB19kZSrBPKNZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8LFwovXu8LNVj05F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
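When inspecting raw model output like the array above, it helps to parse it and check each coding row against the expected dimensions before trusting it. The sketch below is a minimal validator; the allowed label sets are inferred from the values visible in this response and the table above, not from a confirmed codebook, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from
# the labels seen in this one response; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed",
                "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against ALLOWED.

    Raises ValueError on a missing id or an out-of-vocabulary label,
    so malformed model output fails loudly instead of being stored.
    """
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError("coding row is missing its 'id' field")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {value!r}")
    return rows

# Hypothetical single-row example, mirroring the shape of the raw response:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
rows = validate_codings(raw)
print(len(rows))  # → 1
```

A failed check pinpoints which comment id carries an unexpected label, which makes it easy to trace back to the exact model output shown above.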