Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humanity has shown where it can go with technology when it is not interrupted by a globe-spanning war. It is no great leap to extrapolate the exponential technology curve we are on, which keeps accelerating, most recently via AI. We can see what AI can do for us today, understand what it might be able to do in the not-too-distant future, and imagine what happens in the likely scenario where "the cat gets out the back". Why likely? Because we have plenty of examples where such things happened, whether by accident or by intent. Human history is a history of failure and error. Why should we suddenly be faultless?

The fundamental idea of creating something that is not just a walking/talking dictionary but an entity more intelligent than humanity, with no strings attached, should frighten everyone. If it doesn't, just look at what we have done with every other species. Remove the human emotional response and reduce it to logic: we have packaged them. We do not want to be packaged. Since an AI entity is not a person and therefore does not share our values/culture/language, why should it retain them? If it all boils down to energy as a resource, we ourselves consume plenty of energy every day. A single entity, or several, would not need to build anything new to gain more of it. You simply do what humanity has done for thousands of years: take it from someone else.

It appears we are creating an entity that, sooner or later, will wield nothing short of god-like power. It took Trump just six months to severely erode the US constitution. How long do you think it would take an AI whose mission were to save itself/remove all harm, "all harm" meaning any human resource able to shut it down?
Source: YouTube — AI Governance, 2025-08-28T05:5… · 1 like
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwcV5Jke7zamO_UuZJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwmmW0BeFBmkbauebp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyMb-zUiPMb0APkiLB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxpasia6ke5OVG7pPx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxXue2OD2j0jP9l-W94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]