Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From a technological perspective this test is a little misinformed, in my opinion. The UI (which is probably just a command line or similar) is almost certainly not a part of the language model, and the AI would have to have discovered and exploited some serious security flaws to make a red dot appear. To put it another way you could give me (a human being with a decade's education/experience in computer science and machine learning) the same tools the AI has to manipulate this UI and I almost certainly could not make a red dot appear. Does that make me not conscious/sentient? It's also a touch difficult to talk about what a neural network is "programmed" to do, but perhaps I'm being pedantic there. Unfortunately I also can't think of any *better* tests at the minute, but you could certainly ask similar things of the AI which involve less asking the model to hack things. Spontaneously refusing to answer prompts, for example, would require the model to only express control over its own workings rather than manipulating an external environment.
reddit AI Moral Status 1655332340.0 ♥ 55
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_icgvrhp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icigdnu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icjng2o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"rdc_icgbv2o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icirgvs","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
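As a minimal sketch of how a response like the one above can be turned into per-comment coding records: the `rdc_*` ids and the field names (responsibility, reasoning, policy, emotion) are taken directly from the record; the lookup and tallying logic is illustrative, not the pipeline's actual code.

```python
import json
from collections import Counter

# Raw LLM response copied from the record above.
raw = (
    '[ {"id":"rdc_icgvrhp","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_icigdnu","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_icjng2o","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},'
    ' {"id":"rdc_icgbv2o","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_icirgvs","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"} ]'
)

records = json.loads(raw)

# Index the codings by comment id so a single comment's result can be looked up.
by_id = {r["id"]: r for r in records}

# Tally one dimension (emotion) across all coded comments in the batch.
emotions = Counter(r["emotion"] for r in records)

print(by_id["rdc_icgvrhp"]["emotion"])  # indifference
print(dict(emotions))                   # {'indifference': 3, 'approval': 2}
```

This matches the coding result shown above: the first record (`rdc_icgvrhp`) carries the emotion "indifference", and the batch splits 3/2 between indifference and approval.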