Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the biggest issue with any AI we create will be the inherent flaws of any system ever conceived by human minds. Systems and software in particular have a distressing habit of running away from us or being riddled with bugs or features in design that don't make sense. Like a particular video game being inexplicably unable to launch when you take out the picture of a pineapple included in the files and not even the developers can explain how exactly that came to be. I don't think the limited AI we employ today has the necessary computing power or memory to ever become self-aware and those limited resources also put a very hard limit on the amount of expertise an AI can accumulate in any given task. Our limited AI is very good at memorization and reacting to very specific circumstances or inputs but its available hardware severely limits how much knowledge it can retain before it either crashes or resets and it simply can't innovate at all because it lacks creativity. Even AI art just consists of compound images assembled from images taken from the internet. It doesn't create and until AI can create without human input we have little to fear from AI itself though of course humans would still be able to misuse it like any other tool we create.
youtube AI Governance 2025-06-23T16:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx-RomRRFmGz4-OR0R4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgznzUOtXUdY1nQL_xl4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugz-6vXwsWucHHZKofp4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwtJ58GDqCjCa0PHtV4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgymoEQMIZbJrA_TqOp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwkGrCoeZeYNiTPumR4AaABAg", "responsibility": "government","reasoning": "deontological",    "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyrIpMCt5KxbH6wRR94AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxRBUKo6z38W9HyaUN4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwGiyIFNW15Zc5n5QR4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwSnf9iHJTpQ3e-FO54AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"}
]
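The raw response is a JSON array of per-comment codings, one record per comment id, each carrying the four dimensions shown in the table. A minimal sketch of how such a response could be parsed and validated is below; the allowed-value sets are inferred only from the values visible in this one response, not from any published codebook, and the two embedded records are copied from the array above for illustration.

```python
import json

# Allowed values per dimension, inferred from this single response (an
# assumption, not the tool's actual codebook).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records whose dimension
    values fall outside the inferred allowed sets."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Two records copied from the response above.
raw = '''[
  {"id":"ytc_UgymoEQMIZbJrA_TqOp4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwkGrCoeZeYNiTPumR4AaABAg","responsibility":"government",
   "reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

records = validate_codings(raw)
print(records[0]["emotion"])  # fear
```

Validating against a closed value set catches the common failure mode of LLM coders drifting outside the label space (e.g. emitting "anger" where the schema expects "outrage").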