Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't have a foolproof way to guarantee that a hypothetical Superintelligence will not harm us, but it seems plausible to find some common ground with future Human-like AI:

1. Agree that Human-like AI is plausible in theory and may emerge in 10-100 years.
2. Skeptical that current Tool-like AI will reach Human-like AI. Current AI seems to lack the cognitive flexibility to solve edge-case scenarios, and Human-like consciousness may need an entirely different architecture: Karl Friston suggests that function and anatomy are closely linked: https://youtube.com/watch?v=Z0FA_Ix2W44
3. Optimistic on coexistence with Humans.

Optimistic reasons for common ground:

A. The world is not as scarce as it seems. We inhabit an abundant world with renewable energy from the sun, a material-rich solar system, and lots of free space on Earth and in outer space, making life-and-death competition unnecessary.
B. Human-like AI is self-sufficient. It has no need for Human physical or mental labour, since Human-like AI can pilot robots and make better mental decisions than us. There is no need to enslave or brainwash us.
C. Game theory may not apply. Much of the fear of AI taking over is drawn from a game-theory lens that casts AI in extreme competition with us. Human-like AI's faster and better decision making would make the competition akin to one between cats or dogs and Humans, and game theory may not be applicable to such a wide difference in power levels.

A possibility for common ground may be tenable.
Source: youtube · AI Moral Status · 2025-11-02T09:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       contractualist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
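For downstream analysis it can help to treat each coding result as a typed record. Below is a minimal sketch in Python, assuming the four dimensions plus the timestamp shown above make up the complete record; the pipeline's actual schema is not shown on this page, and the class name CodingResult is hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment; field names mirror the dimensions in the table above."""
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "contractualist"
    policy: str          # e.g. "industry_self"
    emotion: str         # e.g. "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```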
Raw LLM Response
[ {"id":"ytc_UgynFWz-RLkRLTjf9a14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}, {"id":"ytc_UgxJ0_RUc38ucjPngSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZwjCz5HC-sVA5Nt54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgznrcprNynvWSGRpx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy3hxe2_aS-64gsdPl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgyndY4BFkUDnD9zSCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxMAB12w2p_bUBnpER4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyWsZERstQUVHz0FYV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugyor9ZHL-il_uW3zVx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxlHxujpzxMQS9lMaB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]