Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can I put this out there in a really stupid way. Granted materials, costs, etc are impossible for most but in this case the same is true with ai. A person at least in America can’t just use his billions and build himself a nuclear bomb or power plant. So why would this not be the same? Just make it illegal to do? Granted other countries will do what they want. But if the world agreed as they did with stopping the creation of nuclear weapons (minus Iran) then seeing the harm in this or the forewarnings, why can’t we do the same, agreed upon countries stopping this? You’ll still need to data centers, the power, the chips, hardware etc. you can’t make it if you don’t allow mass production of x components. If you stop the sale of bullets a gun is useless. Granted as humans we seem to be reactionary vs proactive, but in this case if enough scientists, scholars, insiders warn the world of the potential dangers, wouldn’t you think they’d stop. Problem is it took the tsar bomb before the world realized we were headed for complete human annihilation. I think with this, there’s no immediate physical harm and when there is it’ll be too late. Easy to see the tsar and think damn this is gonna fuck us up if we don’t stop. Very different than looking at tech or tech companies and thinking we’re all gonna go extinct. Last point, are we over reacting. One thing they thought at a point was electricity was unpredictable and its use would kill all humanity. Apocalyptic end of humanity? Well that didn’t happen? Or maybe it is? Just took the creation of electricity years to get to this point now ? AI couldn’t work without electricity. Ai is the electricity of the 19th and 20th century just its advancement can happen at a greater and more exponential scale. I’m lost in this rant now. But I love these interviews and I’m certain this really needs to come to an end but fear it only will with human extinction
Source: youtube · AI Governance · 2025-10-18T06:0…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        contractualist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwKcAoGbu4sOZ-dlvJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyOAZGJegqseuddevF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzoJEJ1q1ZimarmuHt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzmautTrToqYsjwtx14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyUV_RR_WO8pr0Em7d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzw2m3DurSZl_Dx63B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwcqsZDViOLwNkAZwl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgznffdO8-EONKw2N8l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8FdO2DPiXWVfzkvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyT5zRkCrVu-av6djF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"} ]