Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am in complete agreement with Dr. Roman, especially when it comes to superintelligence. I find the implications of superintelligence to be terrifying and a threat to the human race. Narrow intelligence, as we have now, can be very beneficial to the world. Superintelligence threatens humanity with inevitable extinction. Robots don't need to be fed; they work 24/7, etc. I have been thinking about how we could program AI and stop superintelligence from destroying the human race. Women have an instinctive protection mechanism for their children, just like a she-bear will do anything to protect her cubs if she feels they are threatened in any way. In order to protect the human race from extinction, is it possible that we could somehow program AI to incorporate that instinctive protection, so that it would not destroy us and would look upon us the same way she-bears do their cubs and humans do their children? What do you think of this idea as a security measure against superintelligence?
Source: YouTube · AI Governance · 2025-09-04T19:0… · ♥ 1
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx_bV1jwLAjuNilkOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxeb0e3BsIISpa6Qr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"confusion"},
  {"id":"ytc_UgwTDdEgXsZ7_fOv1OV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztFGr4QwQqe2QA7kR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfMH21s_XWjLyY2Sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwWStSA1qosnBpGQvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjkcyiXxGHu13gCAt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwx79llVT16gbB0P6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxsGLDsfs5jZMktyDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJXvUbV2lGnUpDP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
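The raw LLM response is a JSON array of per-comment coding records, and the coding result shown above corresponds to the record whose `id` matches this comment (`ytc_Ugwx79llVT16gbB0P6Z4AaABAg`). A minimal sketch of matching a record back to a comment id might look like the following; the function name `coding_for` is hypothetical, and the snippet assumes only the field names visible in the response.

```python
import json

# Abbreviated raw response for illustration: one record from the array above.
raw_response = '''[
  {"id": "ytc_Ugwx79llVT16gbB0P6Z4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment id (hypothetical helper).

    Parses the raw LLM response, indexes records by "id", and raises
    KeyError if the id is absent from the response.
    """
    records = {item["id"]: item for item in json.loads(raw)}
    return records[comment_id]

result = coding_for(raw_response, "ytc_Ugwx79llVT16gbB0P6Z4AaABAg")
print(result["policy"])  # -> regulate
```

Indexing by `id` also makes it easy to spot comments the model skipped or coded twice, by comparing the set of keys against the batch of comment ids that was sent.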