Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We need to implement Asimov's Laws of Robotics, which includes the law that a robot cannot harm a human. But if we use them in warfare and policing, which is what some are considering, then that law goes right out the window. And then any ideas or knowledge about harming humans goes right up into the cloud and all of the robots will have access to these ideas and methods.
youtube · AI Moral Status · 2023-02-02T09:5… · ♥ 40
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxsbARImU108B3eaMZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyCRABh06CZnRPIYb94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyWfXrv8R3DF6O-O5Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy8Jt_dcjLoahVMEkJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzjeajTHKk5kU7BWjR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3UpjyTugarxlTA394AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwU_v6o88baCa6H9HZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzt2yzugBvhCsGPdwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyVhp3D8pzxH4Q2lHN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwhecwuPVtPxvx5Mwx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
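To inspect the model output for a single coded comment, the raw response can be parsed and indexed by comment id. A minimal sketch, assuming the response is always a well-formed JSON array of records like the one above (the ids and values here are copied from that response; the variable names are illustrative, not part of the pipeline):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgzjeajTHKk5kU7BWjR4AaABAg",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy3UpjyTugarxlTA394AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

# Index the batch by comment id so any coded comment can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the codes for the comment shown above.
record = codes_by_id["ytc_UgzjeajTHKk5kU7BWjR4AaABAg"]
print(record["policy"])   # regulate
print(record["emotion"])  # fear
```

In a real batch the response may be malformed or miss ids, so a production version would validate each record against the expected dimension keys before indexing.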