Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do not believe we need to fear a Terminator scenario. More likely, we need to be concerned about how society will be changed by AI and smart robotics. The problem does not lie with the tech but with how people adapt to it. After the calculator came out, the number of kids unable to do basic math only went up. Many school systems adopted new ways of doing math that allow a kid to guess their way to the answer, but these are much less efficient because they add many more steps to the algorithm. If I were an employer and needed to lay off an employee, I would lay off the less efficient one. After self-driving cars become the norm, the number of people unable to drive will go up. One day a person being transported by his car gets caught in a snowstorm and the sensors freeze up, prompting the computer to turn control over to the person. His driving experience was a two-week course in high school 20 years before. Now he is white-knuckling the steering wheel while trying to drive to safety without the benefit of the experience that comes from years of driving. He may know which way to turn the wheel in a skid, but without that experience he will very likely instinctively turn the wrong way, spinning the car out of control. What happens when AI and robots multiply in the workplace until unemployment reaches unprecedented highs? Will anarchy set in because people are unhappy with the status of their lives? Will people indenture their lives to learn a new trade, only to have that trade taken (before they can graduate) by a machine, because AI can learn exponentially faster than a human? The few jobs that may be safe from AI are perhaps teachers (parents may fear that kids taught by machines would develop social-interaction problems) and doctors, for their bedside manner (they can still use AI to diagnose, etc.), because we may still want that personal human interaction. The number of these job openings won't be enough to meet the demand.
youtube 2019-10-15T05:2… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       virtue
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxnzEz6fNAgGKiv4eR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxm642An5PF3TdljV54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxQ6MIQU8-AdE0Gc-Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwOffgC-yhmJB-CaeB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwD1mkKLINoPeQiA4B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxaGIaEemU2-gvSU4Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxXQRaKrrH7Kroh_LN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx7PxZMcfljnYyesGd4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzwZqNJ8qc1I_AHdmd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz7B938YqkoJusVzJ54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
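The raw response above is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how one could parse it and look up the coding for a single comment (the `coding_for` helper is hypothetical, not part of any tool's API; the field names match the JSON shown, and the example `raw` string is a one-entry excerpt of the full response):

```python
import json

# Excerpt of the raw LLM response above, reduced to the entry for the
# comment shown on this page (id copied verbatim from the response).
raw = '''[
  {"id": "ytc_Ugx7PxZMcfljnYyesGd4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "resignation"}
]'''

def coding_for(raw_json, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            # Strip the id so only the coding dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

result = coding_for(raw, "ytc_Ugx7PxZMcfljnYyesGd4AaABAg")
print(result)
# {'responsibility': 'user', 'reasoning': 'virtue', 'policy': 'industry_self', 'emotion': 'resignation'}
```

The returned dictionary matches the Coding Result table above, which is how the per-dimension values on this page are derived from the raw response.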