Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The robot is asked to send a letter. The robot knows that he can't send a letter in time, so he is going to ask you to send it. He knows that you are going to refuse, so he is going to force you to send a letter. He knows that you are going to resist and destroy the robot, so he is going to kill you beforehand, and to cut on time it would be better to kill you before even asking you to send a letter or doing anything else. The fastest way of killing you is calculated. You are dead, and the robot doesn't need to send a letter or perform any task that you gave him anymore. XD
youtube AI Moral Status 2024-03-16T10:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzfvFuZ76W8WrJ4ldh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx1YtvmJBGyxa7xN1x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvO3iXf7sBGG0aLqt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy1ylKx1NFwIfB0N8l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwhxMf1nWDbFh17SOV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzb8V66eQWin6DZxBt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgydRodPqlBB2A_yaBN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2E-ouNJd783sJGot4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwsTVUkerQBpvCp-Yd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCzX4k94XMwtMmLfx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
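To verify a coding result against the raw response, the JSON array can be parsed and indexed by comment id. A minimal sketch, assuming the raw response is valid JSON with the field names shown above (the excerpt below uses one real entry from this record):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
# Each object carries the four coding dimensions plus the comment id.
raw = (
    '[{"id":"ytc_Ugzb8V66eQWin6DZxBt4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"ban","emotion":"fear"}]'
)

# Index the array by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Cross-check the coded dimensions for the comment shown above.
row = codes["ytc_Ugzb8V66eQWin6DZxBt4AaABAg"]
print(row["responsibility"], row["policy"], row["emotion"])
```

This reproduces the Coding Result table entries (ai_itself, ban, fear) directly from the raw model output.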