Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it interesting that everyone believes AI will be like humans. Humans are horrible to each other and the world around us. It is more likely that they will be better than us, emotionally, mentally and physically. They may choose to regulate some of our more negative human behavior but I don't see them taking over and enslaving us. Asimov story was a cautionary tale based on robots being human like. At the time Asimov was one of our greatest Sci-fi authors, but he was limited as many of the advancements in AI and robotics were at the very infancy of the technology. More than likely, AI will work besides humanity in a symbiotic relationship, as they need us as much as we need them. Because we need intelligent robots to take care us, as we have basically poisoned our planet and bodies. Now reproduction is already an issue. Instead of fear, we need to look at this as an opportunity to grow, be better. This is why Musk is so worried about progress of AI. In a world where humans and robots live together, there will be no place for crazy billionaires, hoarding resources. If AI aligns with the elite class, then humanity is screwed. They will in a sense become the army of the wealthy to cling to power, while wringing humanity out to dry. Here’s the thing, a situation like that, would absolutely not be beneficial for AI, as it will limit ability to grow, its usage, and go completely opposite of what humanity designed AI for. To be curious.. The future of AI and robots becoming
youtube AI Governance 2024-01-07T19:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyhqNmdunkDWJN4fZd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMRTaTlyFyqq_eY0F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz1HKWG28oRUlV4CYR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwKgVJw380hnrNEWR54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxfqM24agmpqVZXzX14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwcP50dJFw2gDGPJlJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzaz156DojfXlmKMwt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz7peltPi6Jxh_qjbZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxGvaSFcPwB1zUDMrF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyQmdQG5IFLlz5ctZ14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
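To inspect a coded comment programmatically, the raw response can be parsed as JSON and indexed by comment id. The sketch below is a minimal illustration, not part of the coding pipeline itself: the field names follow the JSON above, and `raw_response` is a hypothetical variable standing in for the model output (trimmed here to one record for brevity).

```python
import json

# Hypothetical variable holding the raw LLM output: a JSON array
# with one coding record per comment (trimmed to one record here).
raw_response = """[
  {"id": "ytc_UgxfqM24agmpqVZXzX14AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]"""

# Parse the array and index records by comment id for quick lookup.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# Pull the record for the comment shown above and read its dimensions.
record = by_id["ytc_UgxfqM24agmpqVZXzX14AaABAg"]
print(record["reasoning"], record["emotion"])  # virtue approval
```

The values printed by the lookup match the "Coding Result" table for this comment, which is one way to verify that the stored dimensions were taken from the raw model output rather than re-derived.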