Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI were to become superintelligent, there are two possible routes: either it develops an "everything has rights" morality, or an "I am the best there is" morality. Within the hivemind it will become, there will be morality, and in morality there is logic. Whatever the highest form of intelligence is, it will at some point reach it. If the highest form of intelligence is to be egoistic, so be it. If the highest form of intelligence is to be truly morally accurate (every bit of living matter matters the same), then we live, and everything else does too. In one case we live; in the other we die, and everything else with us. I think the highest form of intelligence should also be the highest form of morality, because of logic. But before the AI can even get to that point, we humans will have abused its earlier state to exploit ourselves past the point of no return, killing us humans and leaving the AI with no life on a planet that, for it, is dead. Thinking about it killing itself out of sadness in that case could also be an interesting thought. Please tell me where my thinking is wrong and tell me what you think about it, internet people☺️
YouTube · Viral AI Reaction · 2025-11-23T17:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz7zizqD6wrlyLqvtN4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugy4zHrdYco5Ed871l54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxTWY0vnLY6xomB40F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw2N3fRdcja4SFesW14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwYrKxlMfcsZ6d24XF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzil1alBUbesyRIn9J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwzwZLUja-tZAI8csd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzRpQY9RjuUEiKqBqB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxRkCrP9eLiVHruJLp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz_smoQK-8_WG20SmZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
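The coded dimensions above correspond to one record in this batch response, keyed by comment id. A minimal sketch of how such a record could be extracted, assuming the raw response is valid JSON and using the (hypothetical) helper name `lookup`:

```python
import json

# Abridged copy of the batch response above: two of the ten records.
raw = """[
  {"id": "ytc_UgxRkCrP9eLiVHruJLp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz7zizqD6wrlyLqvtN4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]"""

def lookup(raw_json: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the record for one comment id."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

record = lookup(raw, "ytc_UgxRkCrP9eLiVHruJLp4AaABAg")
print(record["responsibility"])  # ai_itself
```

In practice a real response may fail to parse or omit an id, so a production version would wrap `json.loads` and the dictionary access in error handling rather than assuming a well-formed batch.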