Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans are very easily bent and corrupted by simply being given power over other humans. As long as this is the case, it would make zero sense to kill them. Why? An AI with humans in cahoots with it will likely be far better at many tasks (including survival) than a pure AI with no humans around. Far more likely we'll be enslaved than destroyed.
Source: youtube · AI Moral Status · 2025-04-27T18:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzCtQIPTm4fmNsnARZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzdlSRb-r3f_oVfo6F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwDHPiNc9X72O22xol4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzzZigMgzXB9hgsBVN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzBB0GDd4vvgrEZhZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxG-RE0CFPtjVyO_8d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxdfPK0EcYHv3q9f9B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8Ngkc0lZk8iJgybp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyaKNwITpKxmX2Roy94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgznxXErosFcdrvV7Ax4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifferen
ce"}
]
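As a minimal sketch of how a coding result can be matched back to its comment, assuming the raw model output parses as a JSON array of records keyed by `id` (the helper `extract_coding` is illustrative, not part of the actual pipeline; the two records below are copied from the response above):

```python
import json

# Raw LLM response: a JSON array with one coding record per comment id.
raw_response = """[
  {"id": "ytc_UgzCtQIPTm4fmNsnARZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzdlSRb-r3f_oVfo6F4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

def extract_coding(raw, comment_id):
    """Return the coding record for a given comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = extract_coding(raw_response, "ytc_UgzdlSRb-r3f_oVfo6F4AaABAg")
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

Looking up the second id recovers exactly the dimension values shown in the coding-result table for this comment.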