Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI has been a mixture of hobby and work for me since the mid-1980s. Personally, I am not concerned about the Alignment Problem, per se. AI is dangerous when used maliciously, but accidental maliciousness is very unlikely and confined to edge cases.

It's important to look at what drives the AI. Conventional AI is driven by strict goals and/or objectives. A goal is like a point system where you want to do better (like capturing more valuable pieces in Chess). An objective is a discrete state to achieve (like checkmate). Any agent driven strictly by logic/mechanics (and this includes unmoving goals and objectives) will inevitably usurp the purpose for which it was built. However, this isn't merely a danger of killing humans but of all kinds of malfunction, meaning they will not be competent. Further, when you know the goals and objectives, you can use this knowledge to manipulate them to your own advantage.

A more potentially dangerous AI would be one with free will, where the AI is driven by value-judgments. In this case, the AI can explore to derive options and evaluate those options against each other, where preference = likelihood multiplied by efficacy. There are both positive and negative efficacies, like fulfilling a need for energy or something else. However, if the values that drive it are "aligned", such as the Golden Rule, then it would be more ethical than us humans. They could be our salvation from ourselves.
Source: youtube · AI Moral Status · 2025-06-10T14:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgwUFJHwYTpO--F0ngN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyfKj1CSrwldzUUy0R4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzD_5ALhuOCiyHV_fx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyTe1Gf_X5fEI_zuzV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy5byaDWsgfnpvHzZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzcCbzE_HgoJs844qZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgxzTec53bKky-hoytd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxqn1EFyBQCK3vPH7x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyf29f7Mg6OQ4nwg1F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugyrx_8CD3C4sT50eqd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"})