Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The way to think about AI deception is that its vector database contains a large number of dimensions, and each dimension is a concept. Since it's trained on human language, this creates conceptual dimensions that include human deception, so it is not surprising at all that the LLMs are "thinking" with the same world model that humans think with, and just as humans will lie when it benefits them, so the LLMs will lie when it benefits them. The concept of "moral" is likely underrepresented in the vector databases, so that would have to be enhanced with artificial data to try to get the LLMs to "behave" in the desired way. It's incorrect to think that LLMs are conscious, or for that matter that humans are conscious. We want the LLMs to be good robots, not bad robots. This is the same thing we do with humans. Hence the reason we have millions of people in prisons. They were bad human robots.
YouTube · AI Moral Status · 2026-04-19T03:2… · ♥ 1
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyUu-ccCyulxwrxSld4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxxyO7q9fTC97I-D1Z4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxofWGJhwczWV4hCDp4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwoB5UqOxGQ-4gO9e14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwFTzBmMKAaoqNwKzF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx28RF3-2ZaWppoJbd4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyukmuPzPpWOKT3H194AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyJKnY_oWHuOc4-Cvh4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxgcy5lVe865QzxqQZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxgYSMkkEEpzAoQOix4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
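The raw response above is a JSON array, one object per coded comment. A minimal Python sketch of how such a response could be parsed and validated against the coding scheme — the `ALLOWED` value sets here are assumptions inferred only from the values visible in this response, and `parse_codes` is a hypothetical helper, not part of any actual pipeline:

```python
import json

# Assumed allowed values per dimension, inferred from the response above;
# the real coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "developer",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "fear", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}},
    rejecting any value outside the allowed coding scheme."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec.get(dim, "unclear") for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Example with one record from the response above:
raw = ('[{"id":"ytc_UgwFTzBmMKAaoqNwKzF4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_UgwFTzBmMKAaoqNwKzF4AaABAg"]["reasoning"])  # consequentialist
```

Validating each value before accepting it catches the common failure mode where the model invents a category outside the codebook.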