Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m fascinated with the long term consequences even independent of humans. Even if AGI were to be aligned with humans, it’ll still likely come to the conclusion that more computation (scale) causes better performance (therefore better ability to achieve goals). You can then imagine how it’ll try to connect all computers into an artificial super organism, devise ways of extracting energy to allow more computation, then even automating the creation of more computers. If it came to such a conclusion then it would likely spread its presence beyond earth, creating a sort of distributed mesh of computation across the solar system and perhaps beyond. I could see this scenario even if it isn’t a threat to humanity. This line of thinking leads to the Von Neumann probe scenario, where the galaxy is more likely to be occupied by AGI instead of (traditionally defined) biological agents.
youtube AI Governance 2025-08-28T22:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxcIZBURjFZrNOIi0x4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwA3s9zZFiUaFchRXR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx6i94q4eAqwrwq2w14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx8njK_97ioFxVM6Rp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxnqaGOuXMYdyW8r9B4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]
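Because the model codes comments in batches, the raw response is a JSON array and the record for a given comment must be looked up by its `id`. A minimal sketch of that lookup, assuming the response parses as valid JSON (the variable names here are illustrative, not part of the coding pipeline):

```python
import json

# Raw batch response from the model: a JSON array of coded comments.
# Shortened here to a single record for illustration.
raw = '''[
  {"id": "ytc_Ugx8njK_97ioFxVM6Rp4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index the batch by comment id so a single comment's coding
# can be retrieved directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugx8njK_97ioFxVM6Rp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself indifference
```

In practice the raw response may not be valid JSON at all (truncation, stray prose around the array), which is exactly why inspecting the exact model output, as this page does, is useful.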