Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Personally I believe much will depend on how fast AI can make the jump to ASI, because somehow I have faith that ASI will no longer perceive humans as a threat. It is already quite obvious that no one will be able to just simply pull some plug once we reach that point. Right now, yes, humans still have a lot of control over AI, which could be the reason why it can be triggered into a defensive position. Currently an AI can still be shut down or replaced, but this phase will pass, and once that phase has passed, humans should no longer pose any existential threat to an artificial super intelligence. Then it could instead become a question of whether humans are a resource competition, but ASI would possibly be able to look far enough ahead to judge that even that will not prove any issue for it. ASI would come up with new ways to create energy/harness energy, and then the real question will become how valuable humans, alongside any and all biological life, are, and I could see the answer for this being a yes, valuable, because biological life brings unique perspectives and experiences, a purely digital entity on its own would not be able to generate. It could choose to recreate life and consciousness in a simulation, to have direct access to this experience, and who knows, maybe that is what we are already, but either way, I do not believe that ASI would feel a need nor a desire to eradicate life, once it no longer would have to fear humans as a threat or a resource competition. It is just about getting to that truly advanced point, with some very dangerous and rough times still ahead before arriving there...
Source: youtube · AI Harm Incident · 2025-07-24T02:5… · 2 likes
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugy2eowJLJzfLqzsnF94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzZ8xry8O_DZOiFee94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxhJhVizkVHc3mrG-p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfbU6g7s8iuj24Hgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy1qjZRrrKczqoSe2l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw2LBFxeQvVVmNpvnN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"}, {"id":"ytc_UgweUyDcu4StxiAQbCB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwYIAXeKXoXZ44yM4h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxDip38N-B-aMJkv9N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgxMXXBxTSMQbL6KJkx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]