Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the point of the book is this: let's say it could take 1,000 years to create a superintelligence that destroys humanity, or maybe it takes 100 years, or maybe 10 years, or maybe one year. Since you don't know how long, is it worth taking the risk? In the book he also makes the argument that, unlike nuclear bombs, if this thing comes to pass we will not get a do-over. That is to say, while it was horrifying what happened when atom bombs were dropped on Japan, that by itself did not destroy all of humanity. In fact, you might argue that it eventually led to the sobering of talks between the Soviet Union, the United States, and the other nuclear powers. The book suggests that we will not get that opportunity.

There's a short story, I think in 50 Short Science Fiction Tales, edited by Isaac Asimov and Groff Conklin, in which an infant ends up with a piece of alien technology. Another character wonders why the alien is so freaked out that this technology was given to the child. To demonstrate, since words aren't enough, the alien takes the technology away from the infant and leaves the child with a loaded gun. Of course, the parent panics, because they can understand the gun. If it's not obvious, our society is the infant, and pursuing superintelligent AI is the loaded gun.

It doesn't really matter whether it's possible to get superintelligence or not. Pursuing it means that either we will achieve it, in which case, if we are not prepared to harness it properly, it will lead to our demise; or we won't achieve it, but we may nonetheless achieve something powerful enough to do the same thing. Why play with the loaded gun?
Source: youtube · AI Moral Status · 2025-11-03T02:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz2MC5eEVARGuy3CCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwYfnt7J6wTRHSiBcN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyKG33hoks_foVgtWF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzPCeLMayKt3iNFdax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfID0Gt7h0dycer3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyFYDQ_c_-eg1-JyO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]