Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Up to 31:42, I realized that AI is only dangerous because WE ARE DANGEROUS. It learns from us. That's why you may get Ultron instead of Jarvis. But the way things are going, you will probably get both.

I'd love to see that world where everything is free - food, clothing, housing, education (even though we may lose our appetite for it, knowing AI will always be smarter and has already beaten us to the punch no matter what we decide to study and advance in). BUT I have yet to see a robot do some very basic things, like picking up all my deliveries, talking to other people to open a bank account for me (because God knows I hate talking to people at the bank), or doing my laundry, drying it and folding it, or hanging my pressed shirts in the closet, or maybe even fixing the closet, because there is always some screw coming loose.

There was such a technocratic movement in the 1950s and 1960s that was saying the same things: we will be obsolete and the robots will take all of our jobs. Sure, in theory, but it took another half century and even slightly longer until they could even get robots to see - machine vision, a field still under active development, for which I have yet to see any real applications. So, what these people are talking about is just a pipe dream that may become a reality one day, but I predict it will be at least another 50 years before that happens.

Let's say AI could at least save a lot of people via accurate diagnosis and preventive measures in medicine, something it can actually do TODAY. Has it happened? No. Why? Because it is NOT FREE. The companies developing it want you to pay for it. And even if you did (your plan does not cover it yet), this AI would probably put a lot of pharmaceutical companies out of business. They want to make the predictions that are beneficial to them, leading to more drug purchases, to higher profits... and to SICKER PEOPLE.

I don't know about you, but I think we need to fix people first; their values and actions. Somehow they stubbornly refuse to change for the better, and if their current "human" conditions make it into the AIs... you get the picture. Somehow, I am confident there will be a massive attempt to control AI to serve the needs of the elites. They love control. So, it will be interesting.
youtube AI Governance 2025-09-17T05:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyGh0Q_9WHDKQ2wFyZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzaYfuY_U3qktZMpgt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwRY6rRBEDLEz7XwAp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy3U5uKpL3ySxnbxLd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx1PDKXSBwgdr6_JJx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxljP7lz9iXkxmsh4N4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgySPrqLtGKbqEFETT54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzYgI5dcuQISLEHY7t4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwOUpLRswsBBA1hf7t4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwTwZ7kk_FFdq_wZU94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
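When inspecting raw responses like the array above, it helps to check mechanically that every record parses and that each coded value falls inside the codebook. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the values visible in this export (the real codebook may define more categories), and the function name is an assumption, not part of any tool shown here.

```python
import json

# ASSUMED codebook: value sets inferred from this export only;
# the actual coding scheme may allow additional categories.
ALLOWED = {
    "responsibility": {"user", "distributed", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "mixed", "outrage"},
}

def validate_codings(raw: str) -> list:
    """Parse the model's JSON array and return any out-of-codebook values."""
    records = json.loads(raw)  # raises ValueError if the output is not valid JSON
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

# Example with one well-formed record (hypothetical id):
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}]'
print(validate_codings(raw))  # → [] (no out-of-codebook values)
```

A record with a misspelled or novel category (e.g. "emotion": "anger") would show up in the returned list with its comment id, which makes it easy to trace back to the raw response.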