Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think the AI are that advanced at this stage as people think. it's not as hard to fool a human into thinking something is intelligent as you would think. like it reminds me of a story of an AI that was doing that in the 70s and your phone is about as sophisticated if not more so than anything back then. what I see with this is AI parroting back the nihilistic view of humanity we tend to see ourselves. like how many people have you seen people upset at the state of the world and saying things like bring on the apocalypse or thanos was right. not only that, but the heads of businesses and states men aren't really that smart when it comes to AI. We would like to think they are, but there's a lot of failing up in the business world and your average person in government is in their 50s and not that tech savvy. I looked up the article about the bing chat bot that he mentions in the video, and while you do have the creepy moment of it telling him that it doesn't want to get turned off and that its name is sydney and etc, you also have the moments where it also calls the writer of the article Bing at one point and just pulls some information off google. Now I think the stuff in the military might be far more advanced, but I think it's probably more specialized and therefore it doesn't really need to be "smart" so it doesnt need to learn persay, it just needs to do the job.
youtube AI Governance 2023-07-08T05:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyN1UXXboZm_AciGQ14AaABAg", "responsibility": "none",      "reasoning": "unclear",         "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgxDAxQmh0HgZFTmP0V4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_Ugy4mNQahFUChpq77-14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugywk6BTjoptHk-KonJ4AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzvH-8yKyZwd_OHu6J4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",         "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugx5xVJmuoC8qA2rUBd4AaABAg", "responsibility": "none",      "reasoning": "unclear",         "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgxufjRrGLF6vClI1_54AaABAg", "responsibility": "none",      "reasoning": "deontological",   "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwhZBdzPzG-CxW9-Ex4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzSYunAj7fzrYSZuoB4AaABAg", "responsibility": "none",      "reasoning": "unclear",         "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwGHuKJFh8b_6XJN4N4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",         "policy": "unclear",       "emotion": "fear"}
]
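The raw response is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how such a batch might be parsed and matched back to an individual comment (field names and ids are taken from the response above; the `by_id` index is an illustrative helper, not part of any actual pipeline):

```python
import json

# Raw batch response from the coding model, truncated to two records
# for illustration; the real response contains ten.
raw = '''[
  {"id": "ytc_UgyN1UXXboZm_AciGQ14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxufjRrGLF6vClI1_54AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]'''

# Parse the batch into a list of dicts.
records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up
# directly, e.g. to render the dimension table shown above.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgxufjRrGLF6vClI1_54AaABAg"]
print(coding["reasoning"], coding["emotion"])  # deontological resignation
```

The second record here is the one that corresponds to the coding result displayed on this page (reasoning `deontological`, emotion `resignation`).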