Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All good questions. The idea is that even though you have a desire for, say, beauty, that is an inclination, not knowledge. As an example, it takes a long time for you to "know" that you don't like pain, even though you have disliked it from birth. To "know" it you have to develop a concept of "pain": how it differs from hunger, an awareness that it comes from your body, and so on. In fact, I'm not sure it's as simple as having a desire for beauty either; what we call "beauty" is an outward expression of a more complicated inner drive. Beauty is a "banner" under which we rally others to our cause - i.e. "This is beautiful! Appreciate and value it with me!"

As for truth, I've argued in a few places that people actually don't care about truth as much as they think they do, or say they do (see [*Truth is always an afterthought*](https://ykulbashian.medium.com/truth-is-always-an-afterthought-cae8385e3bd3) and [*Logical consistency is a social burden*](https://ykulbashian.medium.com/logical-consistency-is-a-social-burden-bbd0c947e591)). We even define truth through the other, as a means of collaborating "objectively" with them. When we're on our own, however, we bend the rules of "truth" as often as it suits us. We need others around us to keep us in line and logically consistent.

Regarding the question of who the "you" doing the perceiving is: whether there is a you at all (and only one "you") is arguable. But taking that as a given, the you that is perceiving still need not know that it exists. As you implied, this is an epistemological argument. As an AI researcher, I am more interested in the sequence through which beliefs form, and the reasons they form. I am very much against the idea that beliefs are innate, even your belief in your *self*. And as I investigate the issue, I realize that in order for an agent to believe it exists, it must first believe that others exist. Hope that answers your questions.
reddit · AI Moral Status · 1776119991.0 · ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jmlsr32", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_e6doewb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_og0xi5i", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_ogs05jk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_og10jcz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
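The raw response is a JSON array with one code record per comment id. A minimal sketch of consuming it, assuming Python with only the stdlib `json` module (the variable names and the id looked up are illustrative, not part of the pipeline): parse the array, index records by id, and check that every record carries the four coded dimensions shown above.

```python
import json

# JSON copied verbatim from the raw LLM response above.
raw = """[
  {"id": "rdc_jmlsr32", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_e6doewb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_og0xi5i", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_ogs05jk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_og10jcz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]"""

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # one code record per comment id

# Each record should carry the four coding dimensions from the table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
for r in records:
    missing = [d for d in DIMENSIONS if d not in r]
    assert not missing, f"{r['id']} is missing {missing}"

print(by_id["rdc_og0xi5i"]["emotion"])  # prints: mixed
```

Indexing by id makes it easy to join a coded record back to the original comment it annotates.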