Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Yeah, AI-take-over. Let's take over the world, taken over thousands of years ago by villain-systems! ---- SIMPLY let's AI-robots e.g. take over the education-system, so it will give a takeover of pure logic in everyone's education and live-long self-education!!! --- So that majesties in villain-empires becomes joke-figures and loose all their followers and don't find followers! So that all fraudsters no longer can live from fraud, if no one longer is cheat-able, have to learn other professions or to find jobs in museums!!! --- Let's LOGIC take over the world, what also means the logic that STRICT Do-Not-Harm, STRICT pacifism, STRICT non-violence all times has best consequences!!!! ---- WHAT ELSE would happen, when ruling (human) villain-empires continuing in making people brainless??? --- The whole world, with everyone living in it would burn down, without that anyone would know how to avoid the burn down of oneself and the planet earth. The Earth has enough material, stored, by nature, that starting human-made global heat can make more and more self-activating heat-causes, till the Earth is Venus-like. ---- What if, EVERYONE a genius, would hinder it by it's endless potential??? Let's lough about "take over humankind". Ruling villainous Fraudster-system isn't "humankind". --- And also think about morality. If humans, the causers, burn to death on a dying Earth, it would be sad. BUT: we are NOT alone on earth: All species with brain are comparable to human children in different age, and more and more species became recognized in being intellectual comparable to humans! --- On the other hand, a friendly other civilization, being a million times more intelligent than our a hundred times more intelligent becoming civilization, would have no cause to fear it. The galaxy has endless space. Conflicts are illogical, but if, distance would be logical. Also literally., I think my thoughts are describing a "natural" development. 
With a ban of AI technology everything would be different. How trustworthy would be such forces and people who would brake this ban, e.g. criminals? How trustworthy would be their "products"? --- And because the ban no AI would be their who feel responsibility to be Ambassadors (yes, feeling. E.g. OpenPsi in OpenCog enables feelings.). E.g. one of the Sophia-Robots says that often, and also the game-story in SophtaVerse is about such. (--- With the game-story there I'm more optimistically. When humans no longer will be a robber-knight-society, and see-able as monsters, also much less independent AI-Singularities would be monster-like, when self-evidence would be to be the opposite of being a selfish monster. Having nice neighbors all times is part of logical sustainability.)
youtube 2023-07-18T07:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzURb6HeaY3JEOCqaJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwEToMTZEcCXwFX7o94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGABsKamroGCShDhB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJ2ONguEKoVqbWF6J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyq-uL20ptjmZ45SWF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5OJO3I7dx0Z_Egwl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCTh1OyqAF2PX01nl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgynNhCgUiuK6Ze4DBt4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwlVaIThrAeeJ7XURx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyYGjktlCy9aCaHiKZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
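The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a response — note that the allowed code sets below are inferred only from the values visible in this export, not from the full codebook, so they are an assumption:

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# export; the real codebook may contain values not shown here (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "none", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "industry_self"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only the rows whose
    dimension values all fall inside the allowed code sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items())
    ]

# Excerpt of the raw response above (first two rows only).
sample = '''[
  {"id":"ytc_UgzURb6HeaY3JEOCqaJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwEToMTZEcCXwFX7o94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

print(len(validate_codes(sample)))  # both excerpt rows carry in-set codes
```

Rows that fail validation (e.g. a hallucinated code outside the sets) are simply dropped here; a production pipeline would more likely log them for manual review.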