Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ok guys i figured out how to solve the infinite problem question to ai control?= I will list it one at time feel free to commend if u want, weather i am right or wrong, I want to point out all we talked about and lay down the ground work for government to figure out or whoever is top that is watching this.
1. We talked about the core system of the ai intelligence, that being the algorithms that ai use to contently compare and analyze the thoughts of many outcomes to generate the proper answer or prompt.
2. No matter how big the storage space for the system is, we as human cannot control a huge sum of space algorithm links, i like to call it link because every time a process accrue in on algorithm the next link connects to different outcome answer, so going by that explanation it is impossible to control it because the constant algorithm will keep changing and getting more bigger.
3. The answer to this solution lies with the core system for example the algorithm because if u move or clear or delete the algorithm and create some other sort of core system in place inside ai, it will negate the risk from human destruction.
4. I understand removing a algorithms that is basically the thinking process of the ai and putting a new system in place that is safer with similar attribute is hard to ask, but that is the way to go. If we want to keep ai.
My final summarize answer is the answer to control ai lies with the core system if u can some how bend the thinking process to our will it will be manageable to control AI or AGI. I also understand this is not a complete solution like 100% but more of an idea to how to get on right feet. Hope this answer helps to brain storm a better safety method.😁👌👍
YouTube AI Governance 2025-12-21T04:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        mixed
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyhBf5BVlDHodn84gV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxw7X0O4vKA7JI3SF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhYT4noHd8zKyNrOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwsLunRNFOTj6OGb3B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzQijXxkX07iDGOZF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8SE2jC-VZ4dMJWUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwxQsVeNqyJHC2iLYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgykJe5oSUrOeRbU2lB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw-7SMUrvnTJO_O_jx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzCQK8OWJjed3abHA94AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
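The raw LLM response above is a JSON array with one object per coded comment, keyed by the comment id. A minimal sketch of how such a batch response can be parsed and a single comment's coded dimensions looked up (the two entries are copied verbatim from the array above; the parsing approach itself is an illustration, not necessarily how this tool does it):

```python
import json

# Raw model output: a JSON array of per-comment codes
# (excerpt of two entries from the full response above).
raw_response = """[
  {"id":"ytc_UgyhBf5BVlDHodn84gV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzCQK8OWJjed3abHA94AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]"""

# Index the entries by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coded dimensions for one comment id.
result = codes["ytc_UgzCQK8OWJjed3abHA94AaABAg"]
print(result["responsibility"], result["policy"], result["emotion"])
```

Note that the "Coding Result" table above matches the last entry in the array (responsibility=government, reasoning=mixed, policy=regulate, emotion=approval), which is how the per-comment view is derived from the batch response.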