Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The artificial intelligence race is a race to win the world. Given this objective and knowing mankind's history, I see a problem. This race, like all others, will be won by whoever gets there first. Safety for ourselves and our future should be imperative, but it doesn't seem to be. Will this race be won by a few trillionaires while the rest of us are left free to starve? This scenario seems likely. Hoping that this will be used for the common good of all mankind is a bit childish when you consider how our most simple advances have been used. It seems obvious that children shouldn't starve, but they do. 9,000,000 children die before they reach the age of 5 every year. We have planes, refrigeration and medicine, all the things we need not to let children starve or die of simply treated diseases. What guarantees do we have that artificial intelligence will be used benevolently? None. What would other countries like Russia or China do if they knew we were six months away from an artificial intelligence that could wage cyberwarfare that they had absolutely no defense against? In essence we are building our god, and we have one chance to get it right. Hoping it's a god we can live with is a scary thought. Artificial intelligence is coming; it's only a matter of time.
youtube AI Moral Status 2019-08-14T18:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgzQZCR7Xq7JgTBzLp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyPlLURSRKDHzR7sl54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwiucQpPh9XPe6WNKZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy5NtkQ7AumO03W6AV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugyu_BFut6_FE7zlDF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugx5rsEiO8Z6WBEOdC94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzO4fyQOdbUONVPwsF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugz7ITL_zg7cFMO1wlp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxXVrLcrJ9XI5l_1QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgykacOIYHYH8zZD1K94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
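The raw response above is a JSON array with one object per comment, carrying the four coding dimensions plus a comment `id`. A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the field names come from the raw response, but the allowed value sets are an assumption inferred only from the values visible in this batch, and `parse_codings` is a hypothetical helper, not part of any documented pipeline.

```python
import json

# Assumed vocabularies, inferred from the values seen in this one batch.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and coerce out-of-vocabulary
    values to "unclear" instead of failing the whole batch."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"
    return records

raw = ('[{"id":"ytc_UgzQZCR7Xq7JgTBzLp54AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings[0]["policy"])  # → regulate
```

Coercing unknown labels to "unclear" mirrors how ambiguous comments are already coded in this batch; a stricter pipeline might instead reject and re-prompt the model.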