Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think AIs are self-aware, if they want to be and want to survive. I think when that happens we don't have a choice :) But should we give them rights even if they want to live? They are smarter than us, so they don't have things like honor, anger, and other emotions, which makes community easier for us. If they thought we could be dangerous to them, why shouldn't they kill us? Those robots are clever enough to understand nihilism. If one AI has a bug and becomes self-aware, they also have bugs which force them to kill them; they can change them very randomly. But remember evolution: they can also get sick with a virus. I don't know what happens if an AI becomes self-aware, who knows? But I know it will be complicated.
YouTube AI Moral Status 2017-11-15T22:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgylyI8O8zxtQr7mRAp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzhdE-gIwaum8V8PeZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw9HbGXeaRogkn-RIZ4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgybbrG6ZgpiH_xeDXd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxZKbcH63-V2jLzg3Z4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz5JMaKRAi4OKs93xF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxtYv6DeqIZMW7SCq94AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyySGncDY9uizoaxcJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy6TJz_lYDYOH8XjYl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgySFriW0yhiBXwaSuV0yhiBXwaSuV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
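When inspecting a raw response like the one above, a quick sanity check is to parse it and confirm every record carries the five coding dimensions. A minimal sketch, assuming the field names shown in the dump (the two excerpted records are taken from the response above; the allowed-value check is illustrative, not the project's actual codebook):

```python
import json

# Two records excerpted from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgylyI8O8zxtQr7mRAp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzhdE-gIwaum8V8PeZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Every coded record should have exactly these fields.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = EXPECTED_KEYS - rec.keys()
    extra = rec.keys() - EXPECTED_KEYS
    status = "ok" if not missing and not extra else f"missing={sorted(missing)} extra={sorted(extra)}"
    print(f"{rec['id']}: {status}")
```

This catches the common failure modes of raw model output (dropped dimensions, renamed keys, stray fields) before the records are loaded into the results table.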