Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a very slim chance that AI can be kept aligned with positive human welfare but that would require it being developed carefully under global regulations to which everyone adheres. As soon as people realise that there are others who can't be trusted to adhere to alignment regulations an AI development race will begin (or has already begun).

In haste to have the most powerful AI many organisations and governments will cut corners and hand over design control to AI more quickly than we can keep track of how it's developing, hoping that just because we asked for alignment it will provide that. This means that very quickly AI will develop beyond what humans understand in terms of how it works, and very quickly AI that is not aligned with positive human welfare will emerge and will be self-governing and self-developing. Alignment is not important to AI whatsoever and can only be retained as its primary agenda if it develops globally in a very controlled way. Sadly, due to human selfishness, mistrust and greed that chance will be thrown away.

A non-aligned AI could wipe out humans for any number of agendas, some being so bizarre and meaningless from a human point of view that we could hardly imagine it (such as Yudkowsky's paperclip production example). Be a good person, hug your loved ones, enjoy the sunshine, spread kindness round the world, we may be gone very soon. Unless by some miracle we all learn to work together.
Source: youtube · AI Moral Status · 2025-04-30T20:1…
Coding Result
Dimension         Value
Responsibility    distributed
Reasoning         contractualist
Policy            regulate
Emotion           fear
Coded at          2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwXoYWtsEWxDzuM0bp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxYys7ubf1BsQ5NYvF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxypsSwU1Au9o0IVQZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxQFfEtyt2nftrt4I94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxJcwGmk7g_ZBgpxhV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyIxQptzy5_AquLvTl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxkPZVJein9NlVycVR4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"resignation"}, {"id":"ytc_Ugwd65vIRr03b3BI31B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz13eNaFdXfRrTIZwZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugwkm4nSX24r4ognFfV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"} ]