Raw LLM Responses

Inspect the exact model output for any coded comment below.

Comment
The answer is easy, if AI takes all of our jobs we will no longer be needed and those who control AI will surely take us all out. That "communist" utopia that some people believe where machines will do everything needed for humans and nobody will ever have to work again will come true, and virtually all of humanity will be killed because why not? They will not risk having people that are of no use to them and can only be a risk against their power. Cloning is already a thing so if they for some reason need humans (be it emotional, sexual, etc) they will just clone humans that would be conditioned to never be a risk for them. There is a reason Palantine's CEO refused to say "Yes" when asked about if humanity should continue or not in an interview. The race to create the most powerfull AI is not about money, it is existential because they all know what will happen. With the speed at wich tech is evolving in this direction of independence from humanity this will become a reallity sooner than later since nobody seems to give a crap and just keeps feeding AI because it's fun, not realising the huge risks involved for all of us.
Source: youtube · AI Harm Incident · 2025-09-22T09:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwT5unX1g55VMQvf6N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx4j3PlSGGiNl0rL3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSXkMEmUSix2qTNNJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxmYQ3y7A749XJheSB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwssTc5nmW3iE0ZJ_54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8yEn5wpjBXkGTI5F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwR3uIWwiwrUin7uWN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw2aNjLcFifGevKs_B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfpjM37k_VdjcozHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyn3lQzDdCmlOgaR5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
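A minimal sketch of how the raw response above might be parsed and sanity-checked before use. The allowed label sets below are inferred only from the values visible in this response, not from the actual codebook, and the `validate` helper is a hypothetical name introduced here for illustration.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
 {"id":"ytc_Ugz8yEn5wpjBXkGTI5F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxfpjM37k_VdjcozHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Allowed labels per dimension, inferred from values seen in this response;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "government", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate(records):
    """Return a list of (comment_id, dimension, bad_value) problems."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

records = json.loads(raw)
print(validate(records))  # → [] when every label is in the inferred codebook
```

Records whose labels fall outside the codebook (e.g. a hallucinated category) would surface here before being written to the coding table.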