Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I keep up with AI quite a lot, my stance is that it has the potential to do more good than harm, it's just a matter of how we use it. As for the notion of AI spreading and taking over everything... Well while it's true lots of technology in our life is interconnected, there's also a tremendous amount of isolated systems that work without any kind of connection to the internet. Furthermore even if AI could take over everything, every factory and the machines in it, there's far too much manual human labour in almost every level of manufacturing for AI to ever be able to make anything even if it had control. Maybe in the future that could change but it wont be any time soon for the simple reason that building physical things (lets say automated machines) takes time, and there'll have to be said automated machines for every single thing a human does from start to finish for everything in order to build up from there. Just look how long it's taking to transition to electric cars, now multiply that for every manual human task, for every job, everywhere, it's orders of magnitude greater. AI is still a huge danger, just not in that scenario. AI being used by people however, is the real danger, that being said, cars are dangerous both by accident and on purpose but they do far more good for humanity and save far more lives than they take... A lot of technology is like that, it's on a spectrum, never truly good or bad, just depends on how it's used and if the greater good outweighs the downsides. Last point I'll make, humanity will always seek to push boundaries and advance, it's what we do. The advancement of AI is going to continue regardless, while it's important not to burry ones head in the sand, at a certain point you just got to get on with your life and make the most of it and not live in fear of every "what if" scenario that pops up. A life lived in fear as a life half lived.
Source: youtube · AI Governance · 2023-07-07T09:1… · ♥ 15
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxOJVNIVBWRD3rHoVx4AaABAg", "responsibility": "developer",  "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugyr5HU4cBDSPKQR8aF4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugwln1MpxhJXLuM02WJ4AaABAg", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw8fsbKGdPRoMkaMZN4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgwYDmEhA9uWIJ3MPu94AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwSGMWfarBTpu0N2mZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugwc8Q1692ypi552eRt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyB1JZnrVBDoiwgpGV4AaABAg", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyiO4SjrFlGozmh3Dt4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxzV7HNAvenTLNBytB4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "none",      "emotion": "approval"}
]
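A response like the one above can be checked programmatically before the codes are stored. The sketch below (a minimal example, not the tool's actual pipeline code) parses the raw JSON array, validates each dimension against the value sets observed in this response — the full codebook may allow more values — and looks up the record for one comment id:

```python
import json

# Excerpt of a raw model response: a JSON array with one coding record
# per comment (list truncated here to a single record for illustration).
raw = '''
[{"id": "ytc_Ugwln1MpxhJXLuM02WJ4AaABAg",
  "responsibility": "user",
  "reasoning": "consequentialist",
  "policy": "none",
  "emotion": "approval"}]
'''

# Allowed values inferred from the response shown above, NOT the full codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def lookup(records, comment_id):
    """Return the coding record for one comment id, validating each dimension."""
    by_id = {r["id"]: r for r in records}
    rec = by_id[comment_id]
    for dim, allowed in ALLOWED.items():
        if rec[dim] not in allowed:
            raise ValueError(f"unexpected {dim!r} value: {rec[dim]!r}")
    return rec

rec = lookup(json.loads(raw), "ytc_Ugwln1MpxhJXLuM02WJ4AaABAg")
print(rec["responsibility"], rec["emotion"])  # user approval
```

This matches the table above: the comment shown was coded responsibility=user, reasoning=consequentialist, policy=none, emotion=approval.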