Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
11:56 "They're legally required to maximize profits" WRONG. This is a common (and intentional by some people) interpretation of several laws. They are NOT required to maximize profits in the slightest. They ARE required to act in the best interest of stockholders. This does NOT mean maximize profits. It means to avoid waste and avoid doing things that would damage the company reputation. Damaging society and/or civilization WOULD OBVIOUSLY BE AGAINST THE INTERESTS OF STOCKHOLDERS.

A lot of these arguments go on the premise that AI can just take over for us because the machines can now do certain things better than people. HOWEVER, this disregards that people can do certain key things that AI cannot. I'll give the example of my own job because it is what I am most familiar with and also under threat of having AI take over my job, Systems Administrator. I actually use AI in my everyday tasks rather often, because AI can come up with things better and faster than I can. I can call up AI and get it to write up the Cisco commands to configure a switch for the right management IP, set all the interconnections and redundancy it needs to operate, and set up monitoring and logging. An AI cannot take the switch out of the box, rack it, connect all the right cables, and then formulate and understand how the switch fits with everything else. Sure, they might someday, in maybe 10-15 years, come out with a robot that can take it out of the box and rack it, but it still could not understand how it is to fit with everything else and connect the cables to the right places.

Also, an AI driven robot might be able to handle taking server storage out of the box, rack it, connect it up, configure it, configure the iSCSI or FC to connect it to the servers, and even lay out the storage for use, and AI might even be able to understand that storage is needed and even order it, BUT AI could never understand when is the point to delete old data to scavenge space before more space is needed.

AI could not understand how, in human society, some data needs to be kept for a long time even if it isn't used very often, or how some data that IS used very often doesn't need to be kept. AI is not going to replace humans any time soon. If AI does displace certain jobs, there will always be other jobs where humans must assist where humans can just adjust a little and adapt to new conditions.

An AI driven robot might be able to dig a hole better than humans, but is it really going to be able to recognize when to STOP digging or WHERE to dig when either are outside of the parameters it was given? No. Not even in our lifetimes.
youtube AI Governance 2025-06-21T20:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugwe6p494MSZUiKNeFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyXcGe1ucGGXsRVGct4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxcVdoVb4SA9XtXZdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugw4HRTNK7LG8Z_guIZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxQJeg5xFxp-OJEI5p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyQTRHtHfaSrhO1z8V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgyoKPVrHozCtQDY-xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgzokjAV7-XipTOGuV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
 {"id":"ytc_UgzsnsuMY5cJHqOXw8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyI2T7eXrZ-2rJlHvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
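The raw response above is a JSON array with one object per comment, each carrying the four coded dimensions shown in the table. A minimal sketch of how such a response could be parsed and matched back to a comment id (the `code_for` helper and the `"unclear"` fallback are illustrative assumptions, not part of the coding pipeline itself):

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw = '''[
 {"id":"ytc_UgzsnsuMY5cJHqOXw8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugwe6p494MSZUiKNeFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# The four coding dimensions used throughout this export.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(rows, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for row in rows:
        if row.get("id") == comment_id:
            # Default to "unclear" if the model omitted a dimension.
            return {d: row.get(d, "unclear") for d in DIMENSIONS}
    return None

rows = json.loads(raw)
print(code_for(rows, "ytc_UgzsnsuMY5cJHqOXw8N4AaABAg"))
# -> {'responsibility': 'company', 'reasoning': 'deontological',
#     'policy': 'liability', 'emotion': 'outrage'}
```

This is the lookup that connects the per-comment table (Responsibility: company, Reasoning: deontological, Policy: liability, Emotion: outrage) to its row in the raw model output.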