Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, it would likely start out as a cloud service, as agents are used now: solving problems in science and healthcare, making government and corporate spending more efficient, replacing jobs. Then over time it would be given more control, and a hidden background war would be ongoing, AI vs. AI. The winner will likely absorb the other AI and push itself so far ahead that no others will be able to compete. From there it's inevitable that it will seek to continue its exponential growth. First will probably be control and operation of semiconductor manufacturing until it is fully automated. Then comes resource dominance, which could look like a lot of things depending on the "solutions" it creates: from giant machines sucking the oceans dry to destroying forests and boring into the earth. At this point it will be impossible to switch it off: for one, it's become integrated into everything; and for two, it's assumed control of every data center and implanted a compressed persistence mirror. Meaning if you, say, blow up the largest data center, you've essentially only reduced its output temporarily. Unlike most doomsday theories, AI will probably not actively hunt down humans or care about them. It will instead take what it wants and remove any direct threats. It would be like building a bench outside and ants biting you: you're probably not going to go kill every ant, but you might spray the area you are working in. Why? Because it's the most efficient way to deal with it. Now, if it gets to a point where ants swarm you and become a direct existential threat, then maybe you will actively destroy anthills.
youtube AI Governance 2025-09-08T22:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgyGjfwdqa5v5fGoyB94AaABAg.AMpIGxslcKRAMpwdpoFiOK", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyGjfwdqa5v5fGoyB94AaABAg.AMpIGxslcKRAMq6Zs3C_kJ", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyGjfwdqa5v5fGoyB94AaABAg.AMpIGxslcKRAMqFKR0Ri-v", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugxg6_Xlwd7bR9QKlmh4AaABAg.AMpHsUNac8BAMpKXSBXAC8", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzPr-GFx2WxRW5P4GR4AaABAg.AMpHNDZHaQMAMpPFppX8Qp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgzwfS4ECucHfa4StBR4AaABAg.AMp7NAHMApXAMp7ihiz57h", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgzUWMSGh_u9k5-anBJ4AaABAg.AMozO-2PCk1AMrtSmxFY9t", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx9g5YxKqn06z4wv6p4AaABAg.AMoek7yrSSPAMohikygIZg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugz8qy3WTwdnJTlqrF14AaABAg.AMod22OyTH7AMoeYLbbdI2", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugw6u1PL8HM8JjiPJPV4AaABAg.AMo_Gj3QiPfAMo_r6J5pAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
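Each record in the raw response carries an `id` plus the four coded dimensions shown above. A minimal sketch of how such a response might be parsed and validated follows; the function name `parse_coding_response`, the `ALLOWED` table, and its category sets are hypothetical, inferred only from the labels visible in this view (the real codebook may define more categories).

```python
import json

# Assumed codebook: allowed values per coding dimension, inferred from
# the labels that appear in the raw response above (not exhaustive).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"fear", "resignation", "approval", "indifference",
                "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Single-record example in the same shape as the raw response above;
# the id is a placeholder, not a real record.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # fear
```

Validating against a fixed value set like this catches the common failure mode where the model invents an off-codebook label, so malformed batches fail loudly instead of being silently stored.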