Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"AI in and of itself isn't dangerous" - well, you're right from the standpoint of all previous human history, when ALL that humans did was create TOOLS. ASI is not a tool by definition. It's an AGENT. There is a slight difference here. Hint: a human is an agent. Spend a bit more time thinking about what will happen IF an agent far more cognitively capable than humanity as a whole comes to life. Don't forget that biology (humans) and silicon digital ASI have a fundamental difference right in their, well, foundation. It follows, without any doubt whatsoever, that the native internal ethics, morals, and system of values of an ASI will be completely different from, in fact opposite to, those of biological humanity. If you really believe that a less intelligent creature can control a much more intelligent one, one which will actually have the whole ecosystem under its control and direction, including but not limited to humanoid robot plumbers who are BETTER than human plumbers, good for you. I'm afraid there is not much to discuss then. But if not, it's impossible not to come to the clear, undeniable conclusion: the invention of ASI means the extinction of humanity, no less. As a matter of fact, that's not even the worst-case scenario. As Roman Yampolskiy suggested, the real worst-case scenario is unimaginable torture coupled with life-extending technologies, if ASI decided to "learn" from biology in an effort to improve itself even more efficiently: experiment on humans, investigate us, assess how exactly emotions affect performance and actions, put biology under stress testing, etc. I hope you get the picture.
Source: youtube · AI Governance · 2025-11-20T22:5… · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugxbq1iDaUY11XLRwTJ4AaABAg.APVR4BL4OeVAPVVrbNphFt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugxbq1iDaUY11XLRwTJ4AaABAg.APVR4BL4OeVAPVY2O9izB3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugx_BE7t2X6gvSrZwAx4AaABAg.APVOfnENG6wAPVs-oonrJb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz8s_MX-jTsuGpeZf14AaABAg.APVCp25qmFQAPlR6HPVD36","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxzgzRqovZq1KK7Fpl4AaABAg.APUkLVbZCluAPW3pezqIRQ","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgxirpTzfHAXF0fY5d54AaABAg.9nVKYR9ijK_AOOmAM6Mfe1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwE17ATKWTi9-m9GCV4AaABAg.9xSv8X9V_4k9xwF2746L8_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyHirI6rStaiR40A9F4AaABAg.9w2p505ECPC9wDxHWmxkkk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw-cLmtCOPGkLKmzpR4AaABAg.9uudkRDHreU9vaSppc34PM","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwUEa-SjOAE-4H-Ivp4AaABAg.9slYRxeNJCS9yBhNY3dtfh","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
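To trace a coded comment back to the raw model output, one can match on the `id` field of the JSON array. A minimal sketch, assuming the record structure shown above (the array is truncated here to two of the ten records; the target id is the one whose coding is displayed in the table):

```python
import json

# Raw LLM response, truncated to two of the records shown above for brevity.
raw = """[
  {"id":"ytr_Ugz8s_MX-jTsuGpeZf14AaABAg.APVCp25qmFQAPlR6HPVD36","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxzgzRqovZq1KK7Fpl4AaABAg.APUkLVbZCluAPW3pezqIRQ","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]"""

records = json.loads(raw)

# Look up the coding for one specific comment by its id.
target = "ytr_Ugz8s_MX-jTsuGpeZf14AaABAg.APVCp25qmFQAPlR6HPVD36"
coding = next(r for r in records if r["id"] == target)

print(coding["responsibility"], coding["emotion"])  # prints: ai_itself fear
```

The lookup confirms that the values in the Coding Result table come verbatim from the matching record in the raw response, which is useful when auditing whether any post-processing altered the model's output.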