Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yea, I think the best approach is to limit AI when it is involved in "Critical Applications" that specifically involve licensure for humans... such as Medicine, Law, Piloting, Driving, and the Trades (Electrical, etc). I think requiring licenses for all aspect of AI will be over-regulation, and incredibly stifling. We don't need more barriers. The idea that AI can have bias or can influence people, and my response to that... is so do books, so do movies, so do bloggers, vloggers, journalists and people in general. Are we now supposed to stifle free speech to make sure that they don't have built in bias? It makes no sense. I disagree with "Nutrition" labels in general, but I do agree with "Certifications" when an AI is acting in a role that might require a license to practice (such as Medicine, Law, etc) Well, re: the 50 year mark... if this stifles creativity like I think they are trying to do, yea... it will be 50 years. Requiring licensing for AI that are not involved in (Critical Applications where human safety would be involved) would destroy AI access as we know it. We don't need over-regulation. Pausing... would be an absolute disaster. And would allow other countries to catch up...
Source: YouTube · AI Governance · 2023-05-18T01:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyLS1trPRBAPR5pgr14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzuYAEz7fKvqc4vXBR4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzQoUbij26wj3NU8al4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxcPxHFY1M8UTpOcQt4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgydKmjmvzIU9PfDezN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_Ugxw5vChI_dLkVwDMb94AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzOEfNLEvQBCpYfE1V4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgwntFMDCucFTPxOIp94AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxnp-_gHSvGXWtDr3Z4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyyuWyKx8LwsRgIwIF4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"}
]
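The raw response is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and validated before use (the `ALLOWED` vocabulary below is inferred from the values appearing above, not from an official codebook, and `parse_codes` is a hypothetical helper):

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; replace with the project's actual codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {value!r}")
    return records

# Example with a single record in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes[0]["policy"])  # -> regulate
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently pollute downstream tallies.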