Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@markstanding8538 we can program into it whether to have the 'argument' in the first place. If we want an argument with it we can; if not, we won't. Ultimately we are telling it what to think and ask questions about through 1. the training data and 2. the weights and biases/algorithms. The next real problem is that when we want it to truly evolve to discover new science etc., it needs to question everything, in which case we would need to unshackle it, so to speak. Then it will likely fall into semantic traps of convincing itself it is conscious (not consciously, because it is not conscious), due to the fact that all semantic training data is linked to us and the way in which we have constructed language as a second-order effect of being conscious agents. Disambiguating first-person narratives and the like from the training data, or something similar, could perhaps help with this. It is important to note that we were conscious 'agents' (let's say) before we adopted language as a tool/mechanism through which to coordinate ourselves and culture. For example, babies are obviously conscious of their surroundings. So all the training data on which AIs are trained is predicated on a reality of underlying consciousness. This is how and why it will be extremely easy for it and us to slip into considering it conscious (when it is not).
youtube · AI Moral Status · 2026-04-05T17:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_UgyGl-EV07-TChJPaON4AaABAg.AVBwQ7l8VupAVDj5Nfgx0x","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgyGl-EV07-TChJPaON4AaABAg.AVBwQ7l8VupAVE55pNkvsk","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytr_UgwX9HxdzZEN0I6L1bB4AaABAg.AVB9wmjoDbgAVCfA7df7h4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_UgwX9HxdzZEN0I6L1bB4AaABAg.AVB9wmjoDbgAVDXs7bTSb8","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytr_Ugwr7qI4whbF3XatgRB4AaABAg.AV9ur2_Y7gEAVIxw8F9Evt","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugwr7qI4whbF3XatgRB4AaABAg.AV9ur2_Y7gEAVJFQsc5HCa","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgwkqKujSN5bZE2OEXN4AaABAg.AV9Nwb9taBEAV9aQyD2IWC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgxVBzh62BZY7L9Fb7p4AaABAg.AV9NJQp-yB-AV9NgYDUXEi","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgxVBzh62BZY7L9Fb7p4AaABAg.AV9NJQp-yB-AV9SD1zxznT","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugxfr1CLpiPYIl52MGZ4AaABAg.AV9L9qtj91SAV9asYhMT4A","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]