Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"It's going to be the good AI against the bad AI..." Has anyone watched the TV series A Person of Interest??? Over a decade ago, there was a drama series which very realistically imagined just such a scenario of good AI versus bad AI. There were many though provoking points hidden in plain sight within an entertaining drama series. I had casually followed the series when it originally aired. Nearly a decade later I began to see parallels between the show and what was taking shape in reality. I believe that it is still available on Prime. I recommend watching. See if you begin to see the parallels as well. You will have the advantage of watching in the now. A now where all of these things are happening in real-time. The first season was a bit slow, primarily to establish the characters. With each new season the show became more intriguing and spilled a few more beans on the, now very real, future of AI. The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as "the Machine" that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. Former CIA agent Reese (Jim Caviezel) -- now presumed dead -- and billionaire software genius Finch (Michael Emerson) join forces as a vigilante crime-fighting team. Using Finch's program, which employs pattern recognition to determine individuals who will soon be involved in violent crimes, they combine Reese's covert-operations training and Finch's money and cyberskills to stop crimes before they happen. Former Army Intelligence Support Activity operative Sameen Shaw joins the pair in their quest.
youtube AI Governance 2024-01-15T18:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx7-MKaMrpKeWezlFZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz-fjaodG72-mCuaoB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyOh6JCYa2yHZVtUah4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"}, {"id":"ytc_UgzYYmYQrNkLMqycGUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxWu6NwVzXDO-J-Kj54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugyf13ci6GjsqevVKqh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgxxrRai_HZ6xXUUpF54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxO1r6O8j6UlDPOd_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxo-shC57G8YRSJeDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyAn9UMXeFG0p5fasN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]