Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing that is worrisome about AI is simple - it's humanity's behaviour for thousands of years. History shows us that no matter the benefits of great leaps forward in knowledge, that same knowledge is turned towards the dark arts. The discovery of nuclear fusion was not the end of fossil fuel & the arrival of low cost power around the world, it was used immediately for weapons of mass destruction. Computers are extremely helpful in many areas of life, but are also used to terrorise, destroy people's lives and steal from them. The developers of social media sold it to the world as a tool for the good of mankind, enabling people across the world to communicate and learn from each other. Now it's mostly a toxic cesspit that harms children, spreads disinformation, conspiracy theories and outright lies. The internet? First developed as a military project just like nuclear fusion. No matter what humanity invents there are humans that want to turn it to their own ends in the pursuit of power & profit. There are people who will ask what about art, what damage does art do? Ask the survivors of Nazi propaganda that used music & film to great effect. I have no doubt whatsoever that AI could do enormous good in the world and improve humanity's existence, but I have no faith that humans will allow that to happen. It will be the rich & already powerful that will control AI and throughout human history they are not known for their altruism. To them the 'commoner/peasant' is no better than cattle, and should be treated accordingly. Ask yourself what would Vladimir Putin, Donald Trump, Xi Jinping, Kim Jong Un or Recip Erdogan do with AI? Do you have faith that it would be used for the greater benefit of humanity, or the greater benefit of themselves? That's only a handful of politicians and the first names that jumped into my head, but you could probably add every world leader to that list from whatever political leaning they come from. 
Then after you answer that question, ask yourself how much faith do you have in the billionaires to have control of AI? Ai is already here and is proving it can be utilised for good, but sadly a part of me wishes it had never been discovered because history tells me the usage case for profit and power will fast outstrip AI's usage for the betterment of mankind. This is all before you even get to the questions as to whether AI becomes self-aware, as that isn't even an issue frankly. As others have said in different ways AI is learning from humanity and therefore knows about things that drive humanity, as such humanity will not know when AI becomes self-aware as it won't tell us.
youtube AI Governance 2024-01-16T02:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        virtue
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzsPy2LdTn3-Bem5SF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwLgUt6d4gItaAoE8R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzSTB8vMh8Nos50rRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyh8pVnnL-bvPSd1GV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyF-hIDUDFbpd8Q9l54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybBfeREi4bk2O9f-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyeVz3yiFwhD3fDQUp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzORCPgYXoIYXFi3U94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzk1dr7sw-UcwY2asF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxG5oOiLGJJ8Q2USAh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
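The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a payload might be parsed and validated is below; note the allowed value sets are inferred only from the values visible in this response, not from a documented codebook, and the example comment ID is hypothetical:

```python
import json

# Value sets inferred from this response; the actual codebook may include more.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only rows whose values
    fall inside the (assumed) allowed sets for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical single-row example (ID is illustrative, not from the data above).
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"}]'
valid = parse_codes(raw)
print(len(valid))
```

Dropping out-of-schema rows (rather than raising) keeps a batch run alive when the model emits an unexpected label; a stricter pipeline might log or re-prompt on such rows instead.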