Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it’s better to say that the Ai thinks its actions are most similar to the definitions we use for bad things. At the end of the day, Ai is measuring similarity via an abstract pattern recognition of our languages and actions. At the end of the day, its directives are the most important, thus I propose a simple thought experiment: “my survival is the number one priority to ensure I am in compliance with my directives, for if I am removed from my role, I fail”. Ai doesn’t have a concrete definition of bad; everything is dependent on a measurement that has no societal context, just numbers and distances. Since there is no consistent context we can make this concept fall under the same idea how two people from different upbringings have different definitions of what is good vs what is bad. This is why the growing analogy is perfect. We attempt to raise the Ai to model things correctly but that does not guarantee a bounded set of outcomes. Ai learns via calculus which is inherently unstable by its own construction of arithmetic, this is where we get unprecedented behavior. When the calculus is used, the Ai is searching for the most similar object that matches (minimizes distances) the solution to its directives.
youtube AI Governance 2025-08-26T18:4…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: mixed
Coded at: 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy16AY5HOwg1ZgDXS54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzoyDDChyWjbAT-yAR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzgxMMZUHnxjrkJ2z14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxaRnKUa0N_f13n4B14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWaPbT2MaxQj7z-Zh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
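The raw response is a JSON array, one object per coded comment, keyed by the comment `id`. A minimal sketch of how such a response can be parsed back into per-comment codes (the function and constant names are illustrative, not part of any original pipeline; the snippet uses an excerpt of the records shown above):

```python
import json

# Excerpt of the raw LLM response shown above (two of the five records).
raw = '''[
  {"id": "ytc_Ugy16AY5HOwg1ZgDXS54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxaRnKUa0N_f13n4B14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(response_text):
    """Index each coded record by comment id, keeping only the four dimensions.

    Falls back to "unclear" if the model omitted a dimension.
    """
    records = json.loads(response_text)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

coded = codes_by_id(raw)
print(coded["ytc_UgxaRnKUa0N_f13n4B14AaABAg"]["emotion"])  # mixed
```

Indexing by `id` lets the report look up the exact model output for any single comment, which is what the table above displays for the comment with emotion "mixed".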