Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the prime directives, according to AI, is profit. You may trust that people who built the foundation of AI gave it altruistic attributes, however, I don't after watching some recent Youtube videos on AI. The robot builders should at least create a vulnerability to exploit if "killing" them is necessary.
youtube AI Moral Status 2026-01-01T03:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyTMsPjVnywjCSAkpB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyz-8_erJRjrOz5oGR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxR5AvPANxhf7jX2IJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzDHvt-VsLC5Pyqk_t4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxBLwzo9s0hS9MAI2h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTcgkSd8_pthKRX4t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwO6Mt1fKUQ485oojZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxXzA9aFDNGUA7BTeR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzhu3Zc9C3SmbOO00p4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwj92AB1ql8l2pRbYx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
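A minimal sketch of how such a batch response could be consumed, assuming the raw output is a valid JSON array with exactly the fields shown above (the single-record string here is copied from the `ytc_UgxXzA9aFDNGUA7BTeR4AaABAg` entry; the lookup helper is illustrative, not part of the original pipeline):

```python
import json

# Hypothetical excerpt of a raw LLM batch response, matching the format above.
raw = (
    '[{"id":"ytc_UgxXzA9aFDNGUA7BTeR4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Parse the array and index the per-comment codings by their comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coding for one comment and read its dimensions.
coding = by_id["ytc_UgxXzA9aFDNGUA7BTeR4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Indexing by `id` makes it easy to join each coding back to its source comment, which is presumably how the "Coding Result" table for a given comment is populated.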