Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No it would be a matter of life and death. We would have to destroy those machines or whatever they are in order to survive. We would have to do this as fast as possible before they are able to organize. Forget about the whole idea of rights and morals. Where are your evolutionary insticts? Your instinct to survive? We can´t even live peacefully amongst ourselfes. How could there be peace between us and another being that is as inteligent or even more inteligent than we are. Most importantly: What if the AI is just as corrupt as we are and then decides to enslave us ? I have seen to many movies on that matter to ever support robots rights :D
Source: YouTube, "AI Moral Status", 2018-10-05T22:4… · 4 likes
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugy5DaglRDrjepjKOnJ4AaABAg.8k1oL2OsUM38k2HC6qNsvX","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy5DaglRDrjepjKOnJ4AaABAg.8k1oL2OsUM38k4OBeUHzbB","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwjhyzmVzw3QHNLkD14AaABAg.8j681g-6pH08jn55lmFzj1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwE9-rJ1xtbiQ98yed4AaABAg.8j4hm3hL45W8pEGW2nchfa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxbgNKJMW57e2gSy1B4AaABAg.8ihGfuOesc58knTYVLHAQL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxbgNKJMW57e2gSy1B4AaABAg.8ihGfuOesc58lo7PgtEaz8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8mEl_BjDksp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyGfDw1xgN5DCKJA9l4AaABAg.8i9wEYQPVNJ8j2YTahlrOK","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytr_UgyaN6sJhihdnnlYSdd4AaABAg.8hoGdZ8UW1D8j2Z8JsIZMh","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
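A raw response like the one above can be parsed into per-comment coding records before it is stored as a Coding Result. The sketch below is not the project's actual pipeline code; the field names come from the JSON itself, and the allowed-value sets are inferred from the values visible in this dump, so they may be incomplete.

```python
import json

# Allowed values per dimension, inferred from this dump (assumption: the
# real codebook may define more values than appear here).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and return {comment_id: {dimension: value}},
    rejecting any value outside the inferred codebook."""
    codings = {}
    for record in json.loads(raw):
        cid = record["id"]
        coded = {dim: record[dim] for dim in ALLOWED}
        for dim, value in coded.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        codings[cid] = coded
    return codings

# Single-record sample taken verbatim from the response above; it matches
# the Coding Result shown for the quoted comment.
RAW = ('[{"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')

result = parse_codings(RAW)
print(result["ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4"]["emotion"])  # fear
```

Validating against the codebook at parse time surfaces off-vocabulary model outputs immediately, rather than letting them flow silently into the coded dataset.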