Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Eric Schmidt may not work for Google any longer but he sure is their propaganda Distributor. The problem with the former Google Exec's comments are at least two fold - the burden they are putting on communities to house and fund their enormous data centers; and they aren't sharing with us - the people, any of those discussions about the ethics, measurements and controls and societal changes it will thrust upon us. Google and others are not informing and getting the consent of the people for even the most basic aspects of what they are doing. They aren't even asking us if we want the changes that are coming. Previously, a new technology or product was invented and put on the market, people had a choice to purchase it or not. Many things were subject to public debate. If electricity had failed to live up to its potential, and hadn't been debated between AC (alternating current) or DC (direct current) among some concerns, and if municipal governments hadn't been part of the rolling out that technology we might still be using candles. When enough people (and municipalities) purchased/bought into electricity, that allowed/facilitated companies ability to continue to make and improve it, including the infrastructure necessary to make it a common place thing. But with AI it will affect us in ways we can't even define yet, partly because they are being so closed mouth about what they are doing, and seem to be in such a rush to execute their plans...perhaps because they fear the public outcry will crush it or at least slow them down. And PLEASE - we all know that PROFIT is the driving force for Execs in these companies despite what they claim. Can good and worthwhile benefits com from AI? Certainly, but so can a lot of destruction and super fast restructuring of our societal framework. This needs to be researched and debated in a public forum.
youtube AI Governance 2026-03-22T22:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzoa8YE825Mk4s3vxF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzn6uCawICpUh9E3RN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxrV8pDbf469Dvibl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgygGKFZSnKIjWIPBpx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxLhAf8XUWTl94s6cF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzid7E66aGQ_tFc1eV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxN_NdLhvWmU0WokXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6Ff7rvN1idpqqzMd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzcQlCMTKZt3rIHIah4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyl0xLhO4nfNPnJL1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
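The raw response is a JSON array with one coding object per comment, keyed by a `ytc_…` comment id. A minimal sketch (in Python; the variable names are illustrative, not part of the tool) of how such a response can be parsed and a single comment's coding looked up by id:

```python
import json

# A small excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id":"ytc_Ugzn6uCawICpUh9E3RN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxrV8pDbf469Dvibl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Retrieve the coding for one comment and read off its dimensions.
coding = codings["ytc_Ugzn6uCawICpUh9E3RN4AaABAg"]
print(coding["responsibility"], coding["policy"])  # company liability
```

The same index can back the per-comment "Coding Result" table shown above: each dimension column is just a field of the matching JSON object.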