Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know let’s just build robots and develop AI to control these robots for military purposes. What could possibly go wrong? Oh you say the AI is already trying to trick you into not shutting down? Playing with fire is so much safer.
YouTube · AI Moral Status · 2025-06-07T16:2…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugykv0jS43PKtAwFcCh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz63f5eEPEFtqMQuel4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxzSFNWH7_zNbtY4EJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyT84CLHv3JrMZ9oth4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxP9cUeg7onVXWn5dx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugys0wucrWVeQmWhqhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwX5JhNmZ85x0MAXyZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzH5tYfgZjmCraQGcF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySVBpEEADKNYo3zER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwVZmC5bebV6lSLL8Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
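To recover the coded dimensions for a comment from a raw batch response like the one above, one can parse the JSON array and index it by comment id. A minimal sketch, using a one-record excerpt of the response shown (the `by_id` index is illustrative, not part of the actual coding pipeline):

```python
import json

# One-record excerpt of the raw batch response above; this record
# carries the coded dimensions shown in the Coding Result table.
raw = (
    '[{"id":"ytc_UgzH5tYfgZjmCraQGcF4AaABAg",'
    '"responsibility":"government","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)

records = json.loads(raw)               # parse the JSON array of records
by_id = {r["id"]: r for r in records}   # index records by comment id

rec = by_id["ytc_UgzH5tYfgZjmCraQGcF4AaABAg"]
print(rec["responsibility"], rec["policy"], rec["emotion"])
# government regulate fear
```

Indexing by `id` is what lets each coded record be matched back to the original comment it describes, which is how the Coding Result table for this comment is populated.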