Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Summary for everybody who does not want to wait 1.5 hours: Key Insights
🤯 Control Gap Widening: AI’s exponential growth in capability far outpaces linear progress in safety research, creating a dangerous gap that increases the likelihood of unintended and uncontrollable outcomes.
🧠 Total Job Automation: AGI and humanoid robots threaten near-complete automation of all jobs, rendering traditional retraining strategies ineffective and necessitating a rethink of social and economic structures.
🚫 Illusion of Control: The belief that superintelligent AI can simply be turned off or controlled by humans is fundamentally flawed due to distributed systems, backups, and the superior predictive capabilities of AI.
⚔ Race Dynamics Increase Risk: Competitive geopolitical and corporate pressures incentivize rapid development of potentially unsafe superintelligence, raising risks of mutually assured destruction without international coordination.
🧬 Human Enhancement Limits: Biological or neural enhancements cannot keep pace with silicon-based AI, widening the cognitive gap and exacerbating control and alignment challenges.
🌌 Simulation Reality: The increasing realism of AI and virtual realities supports the simulation hypothesis, providing a philosophical lens on existence, ethics, and the nature of intelligence.
🤝 Urgent Need for Collective Action: Despite uncertainties and difficulties, raising awareness, engaging in democratic processes, and supporting AI safety research are critical to mitigating the existential risks posed by superintelligence.
youtube AI Governance 2025-09-04T08:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyykzXdt2o81q17dm54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzhy7kqjCnTLb7ye4J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxL15j4pTEUpxxgBfJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwsU7hKABRbaRriA-R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx7lAfdYX5cQsq3kV54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy2hCDvuVP0k8_Hp6x4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxeSsPFJ2QN5a1bXtR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw8sD9nDWid_JTVYhl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgygL5d5ZoLfTwNEoop4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwY44G6K9FS20On6iV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
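The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) plus the comment id. A minimal Python sketch of how such a batch could be parsed and validated (assuming standard `json`; the field names match the records above, but the validation logic is illustrative, not the pipeline's actual code):

```python
import json

# Two records copied from the raw LLM response above (truncated batch for brevity).
raw = """[
  {"id": "ytc_UgyykzXdt2o81q17dm54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy2hCDvuVP0k8_Hp6x4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

# Every record must carry the comment id plus all four coding dimensions.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)
for rec in codes:
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")

# Index by comment id so each coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in codes}
print(by_id["ytc_UgyykzXdt2o81q17dm54AaABAg"]["emotion"])  # fear
```

Validating the field set before indexing catches the common failure mode where the model drops or renames a dimension in one record of an otherwise well-formed batch.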