Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am a retired Silicon Valley software architect. I never worked in AI, but my Master's project was a neural network program and my degree was in math/computer science. So I am not as expert as someone in the field, but I am as qualified to have an opinion as anyone without a PhD or work experience in the field. Also, those in the field have prejudice and possibly subconscious bias (ego, greed, group loyalty, etc.), while I have "no dog in the hunt". After a lifetime spent watching and being part of tech developments - played computer games in 1972, saw ARPAnet in 1974, saw the Xerox Alto (16-bit personal computer with Ethernet, mouse, graphical windowing UI) about the same time Steve Jobs saw it, on email in 1980, had internet on my UNIX workstation before there were pictures. In short, I have seen tech/DP things come and go; some took over, some were a flash in the pan. Here's my opinion: no one knows. Self-driving cars were supposed to be here by now. They have mostly figured it out. We are not all driving them; our new car has several related driving aids, and we mostly turn them off because they are imperfect and annoying. You still have to drive the car, or at least supervise the computer. The only cars currently without a driver are Waymo et al., currently working only in sunny cities with sunny weather and with remote human supervision. We were supposed to have completely autonomous driving by now, and it is unclear when we will get it. The last 2% may cost exponentially more in development and operating costs. Sort of like NP-complete: we might know how but be unable to do it in practice. My take is that no one knows the limits - where we will find a hard wall, where it will take huge time to overcome a limit, how it will interact with our psychology and economics, and other unforeseen problems. Internet/cell phones/social media have very possibly harmed our mental health, and AI could put so many people out of a job that we get economic breakdown, social unrest, and psychological damage.
As he pointed out, scaling up current AI systems will cost exponentially more: more electricity than we can economically generate, maybe more than is even theoretically possible. Maybe AI can solve all problems, but it would take more energy than there is in the universe. I would not bet on whether it will take a week or a century to get there, nor on where there will be absolute limits. For instance, he talked about giving machines emotions (do we want to?). To do that might require neurotransmitters and feeling pleasure and pain; it is not clear this is possible with silicon. Right now, for me the top question is not how far AI can go, it is the impact on humans. There is a 50% chance (or whatever, but not close to 0%) that AI will take many jobs and accelerate the wealth transfer from most of us to the wealthy. That seems likely to me and is the immediate problem. If AI and robotics can do what people do, and it is owned by less than 1% of the population, wouldn't that leave the rest of us useless beggars?
youtube AI Responsibility 2025-12-12T19:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzJD4677wXn6ZZa2BJ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugy_813MxAtv1gyK4u94AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgzNAd02qBx7Noc0mrF4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgwXUSXxGlVLzkXcoG54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_Ugz3CmBvEbqmY9qZ6D54AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgwtnqL2wcNYfTPSUgJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgyGTmI9WYL0ou-ANXp4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzLdppqLlP8mQaAQyN4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwYZrYtNmu4CTLbu6F4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzFo5pY00-f8IVodaJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"}
]
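Responses like the one above are easiest to reuse when they are parsed and validated before analysis. Below is a minimal sketch of such a check; the allowed label sets are inferred from the values that appear in this response, not from an official codebook, and the function name `parse_codings` and the sample `raw` string are hypothetical.

```python
import json

# Label sets inferred from the values observed in this raw response;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only the rows whose labels all fall in the allowed sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Hypothetical one-row example in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"none","emotion":"resignation"}]')
print(parse_codings(raw))
```

Dropping (or logging) rows with out-of-vocabulary labels, rather than coercing them, keeps model hallucinations out of downstream tallies.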