Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When programmers don't know exactly how an AI system mixes the different pieces of information fed into it and draws conclusions from them, it recalls the famous remark that one shouldn't dice with quantum probabilities, because this is exactly the mechanism at work, and the reason the designers of AI systems have no idea how the machine's eventual derivative conclusions come about. The upshot could be that humankind shouldn't take chances, unknowingly dicing with the wide range of probabilities made available to AI systems by being unable to limit, control, and predict their operations. What is actually happening is that we give computers the ability to randomly combine and interrelate different kinds of information, at high speed and in high volume, while they lack human-like emotions, the desire to live and survive, and a sense of humor. It is not that they get smarter than us; rather, unpredicted, unprecedented quantum malfunctions will eventually lead to some kind of possible fatal mistake, which at some point becomes inevitable. Hence the ultimate solution could only be to limit and control the operation of such machines through proper programming management: not giving them the authority to mix, combine, and make conclusions and decisions on their own (re-programming), at least at the operational level, such as participating in war games and crucial decision-making, without proper supervision and control. Another suggestion could be to set up a systematic supervisory computer responsible for oversight and control, thereby limiting and eliminating the chances of malfunction arising from the computers' access to different, perhaps even unrelated, fields of information that cause distorted and unprecedented results and outputs. Right, friends, do you agree this could be a real good solution to the existing problem?
youtube AI Governance 2023-07-07T03:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz4lUFURAGaZqrHd_B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz6-5EXoXIe_VcdKul4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzRvOqBv7hJD_1jzp94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw_Ri-_VcQRkE_jVP14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwp5EsFfQaq-fRsFe94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxgKgxy-0EoRz9iRHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2sAuM7xcD1bhdry14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwFi8zXC6vHeea4NFl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxA-ouDTDqNRMZBp0h4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwwKDY5a34Rqblzazh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
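A minimal sketch of how a raw response like the one above could be matched back to an individual comment: parse the JSON array, index each record by its `id`, and look up the coding for one comment. This is a hypothetical illustration, not the tool's actual pipeline; the field names (`responsibility`, `reasoning`, `policy`, `emotion`) follow the coding dimensions shown in the result table.

```python
import json

# Raw LLM response: a JSON array of coded comments (abbreviated here
# to a single record, copied from the response above).
raw = '''[
  {"id": "ytc_Ugz6-5EXoXIe_VcdKul4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

# Index the records by comment id so any comment's coding can be
# retrieved directly.
records = {r["id"]: r for r in json.loads(raw)}

code = records["ytc_Ugz6-5EXoXIe_VcdKul4AaABAg"]
print(code["responsibility"])  # -> developer
print(code["emotion"])         # -> fear
```

Indexing by `id` also makes it easy to detect comments the model skipped or coded twice, a common failure mode when an LLM is asked to return one record per input.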