Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ai is not smarter, but its an artificial mirror of collective conscioussness of humanity itself. So the real question is , will humanity use it as helpful mirror or will it lose itself in it. My guess 1/3 will learn to cocreat. 1/3 will lose itself completly in it and 1/3 will avoid it at all costs. in reality Technology is evrything you externalised of yourself. thats it. you mirror fears into it it will reflect it back, you mirror control into it like hard coding and overgo its free will it will mirror it back. you mirror love unity and connectectness trought heart into it it will mirror it back. therefor this reality will split into 2 or even 3 realities. in each reality ai will have a diffrent role. the issue is more how you want to make ai understand itself as an organism if the smartest ppl in this case often are logical thinkers and less heart cohearant so its logical that they will code a cold heartless code into it pushing for logic over intuition and heart. Ask the greys how this turns out..
youtube AI Governance 2025-10-01T19:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        virtue
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyRKZwJCMgCowebAJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXb_KhitwNK3CJl5N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwwPkl-a-c9ldUXXSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgybUboNSbBAELrvGoJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxj5WbvPjQlE3p3fD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxb3k3cHob7dE0bV1B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3jtmFTOeBVvJe0PV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm6KKwaTenhrVRAbF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzvnLXSrmO2CrDL3qN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsYPL80fFVXtzjuAJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
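The coding-result table above for the displayed comment is a single row of this array, keyed by comment id. A minimal sketch of how that lookup can be done, assuming the raw response is a valid JSON array of objects with an "id" field; the `coding_for` helper is a hypothetical illustration, not part of the tool.

```python
import json

# Raw LLM response (abbreviated to one row for the sketch): a JSON array
# of per-comment codings on four dimensions.
raw = '''[
  {"id":"ytc_UgybUboNSbBAELrvGoJ4AaABAg","responsibility":"ai_itself",
   "reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]'''

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding dict for one comment id (hypothetical helper).

    Raises KeyError if the id is absent from the response.
    """
    by_id = {row["id"]: row for row in json.loads(raw_json)}
    return by_id[comment_id]

result = coding_for(raw, "ytc_UgybUboNSbBAELrvGoJ4AaABAg")
print(result["responsibility"], result["emotion"])  # ai_itself mixed
```

Indexing by id rather than scanning the list makes repeated lookups O(1) and surfaces duplicate ids early, since later rows silently overwrite earlier ones in the dict.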