Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There’s one teensy tiny problem with this AI doomsday scenario that AI wrote which tells you how “smart” these creepy AI really are: AI’s are just like us. We need food and resources to thrive. So do they. We need food. They as digital entities living in a mostly electronic environment, they need electrical power. And although many utility systems are automated and can be controlled by software, they still do exist in a physical world that was built by humans. There may be computers, yeah, but they’re all parts of systems attached to machines with nooks, crannies, pipes, gears and levers. Manual systems that still require human handling/intervention and subject to wear and tear. Those physical systems fail, power fail, computers shut down. Even if by some miracle really creepy smart AIs spread themselves to other systems, they’ll be crippled if power fails to those systems. They may launch drones and what not but those are still machines that will be subject to those same power and physical constraints. And right now, Earth isn’t in a state where a sociopathic super smart AI will have so many robots and high tech available to enslave us and make coppertops out of us like in The Matrix. Yeah, AI development is scary. I am legit scared each time I try to do a google search and the search is completed by predictive text even before I finish typing a sentence. And it is even scarier that with so many SF examples of AI going bad humanity still wants to push forward with AI development for vital functions such as national defense. But I gotta say, as much as I liked this channel when I first tuned in because each episode offered a healthy dose of skepticism, this is just another episode that steeps a bit too much into fear mongering and even sensationalism. Not sure how to feel about it tbh…
youtube AI Governance 2023-07-07T12:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyYK6Pl_7tuhZ-0z4B4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugw_lS5Ed2T8VWsT4bZ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwR4e2yVi1QTz60BTJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxfRLTioDl4jNoKKWN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzlKMXO626NIvk9jr14AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy7Y0w3NnD1kv9Vm494AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzW02AiHkfiSy1TUjF4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgylhP1uYsj_w84MEi14AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzG1JH5Rq6nzhjSIh94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzRYFYqTtKUreLXXUJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"}
]
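To inspect the coding for any individual comment programmatically, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch (variable names are hypothetical; for brevity only two entries from the response above are reproduced, including the one whose values appear in the Coding Result table):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two entries reproduced from the full response above).
raw_response = '''
[
  {"id": "ytc_UgzG1JH5Rq6nzhjSIh94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear",
   "emotion": "indifference"},
  {"id": "ytc_UgzRYFYqTtKUreLXXUJ4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear",
   "emotion": "indifference"}
]
'''

# Index codings by comment id for quick lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_UgzG1JH5Rq6nzhjSIh94AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["emotion"])         # indifference
```

Looking up `ytc_UgzG1JH5Rq6nzhjSIh94AaABAg` reproduces the dimension values shown in the Coding Result table.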