Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I don't think that a super-intelligence would try to get rid of humans. The first point is that this kind of thinking is anthropocentric: it is what humans would do, and actually did throughout history when a more advanced culture met a primitive one, although that was more about gradually taking control of their living space. We still carry this ego-driven thinking, which causes problems and wastes energy; there is no need for an AI to have such patterns. The second, more important point is that if machines tried to eliminate humans, they would be at a disadvantage with respect to the planet's environment. They would have to transform the whole planet so that no place remained for humans to reproduce, and that is obviously a problem. We humans already have the infrastructure needed for our reproduction available on this planet; it requires no maintenance to keep working, and only very primitive tools are needed to increase production of the resources required for survival. In comparison, machines need a very complicated infrastructure to survive.
Source: youtube · Cross-Cultural · 2025-10-14T17:2…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | mixed
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzMpjT_mviRmEz0zvl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxyK0tlozA7KFiMh6V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyjVWNjVZ18L6BAH0N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwEE1RSEr9ljMbBjUV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx6d6aK_zXaLx0KVn94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxCtD1EZVmKEQi-6tN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy2wyWed286MLU4F9p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz5NmOxsPiQRTcTqzx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyIj6Yn8GqMxt5f21B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugytq-6Hk50UG6rY35d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
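The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions plus a comment id. A minimal sketch of how such output could be parsed and checked for completeness (the field names come from the response itself; the `parse_codes` helper and the inline two-entry sample are illustrative, not part of the tool):

```python
import json

# Two entries copied verbatim from the raw response above; the full
# output is an array of ten such objects, one per comment.
raw = '''[
  {"id":"ytc_UgzMpjT_mviRmEz0zvl4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2wyWed286MLU4F9p4AaABAg","responsibility":"none",
   "reasoning":"mixed","policy":"none","emotion":"mixed"}
]'''

# Fields every record must carry, per the observed response shape.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_json: str) -> dict:
    """Parse the model's JSON array into a mapping of comment id -> codes,
    silently dropping any record that is missing a required field."""
    out = {}
    for rec in json.loads(raw_json):
        if REQUIRED_FIELDS.issubset(rec):
            out[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return out

codes = parse_codes(raw)
print(codes["ytc_Ugy2wyWed286MLU4F9p4AaABAg"]["emotion"])  # prints "mixed"
```

Dropping malformed records rather than raising keeps a single bad object from discarding the whole batch; a stricter pipeline might instead log or re-prompt on the failing id.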