Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have zero background in anything, so these are just my own fledgling thoughts on the matter to test the waters of how well I understand this.

1. How do we test the intelligence of humans?
2. This is the one I'm interested in. AI has no reason to program the way we do, so saying it could even adjust our initial source code is a stretch. It may be able to replicate the functions and create another AI like itself using its own methodology. I say this because a program doesn't know all the rules about programming that we do, specifically that fundamental programming relies on boolean states of on-off or 0-1. Take [this](http://www.damninteresting.com/on-the-origin-of-circuits/) article, for instance. A program was given the chance to write a configuration file for a Field-Programmable Gate Array chip and ended up abusing flaws in the specific chip to accomplish its goal, because it didn't know any better. A self-programming AI would probably do something similar, in that it wouldn't be able to read or make sense of our programming and we wouldn't understand theirs. That said, it would have to replicate itself first, and in doing so it would have full access to remove programming and features.
3. Why would it? Self-preservation is an evolutionary imperative because our deaths are permanent. Early injuries would usually lead to death, so harm is generally avoided. An AI might even self-terminate when it feels it no longer matters. Unless the digital equivalent of addiction existed for it to constantly seek out.
4. If you can give an AI a bit of information and that AI can formulate an estimate of what percentage that bit represents of the whole (even if it's wrong), that shows it's aware of a situation larger than what it currently has knowledge of. (It understands which questions to ask based on the questions it has answers to.)
5. See 3.
6. Not my question to answer.
reddit AI Bias 1438044149.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_ctho8la","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_cths3wu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_cthvmen","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_ctifmmc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_ctho3gc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
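The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be validated and tallied (the dimension names are taken from the table above; everything else is an illustrative assumption):

```python
import json
from collections import Counter

# Raw LLM response copied verbatim from above: a JSON array with one
# object per coded comment.
raw = ('[{"id":"rdc_ctho8la","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"rdc_cths3wu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"rdc_cthvmen","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"rdc_ctifmmc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"rdc_ctho3gc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')

codes = json.loads(raw)
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

# Reject any entry that is missing a coding dimension.
for entry in codes:
    missing = DIMENSIONS - entry.keys()
    if missing:
        raise ValueError(f"{entry['id']}: missing dimensions {missing}")

# Tally each dimension's values across all coded comments.
tallies = {dim: Counter(e[dim] for e in codes) for dim in DIMENSIONS}
print(len(codes))            # 5 coded comments
print(tallies["emotion"])    # Counter({'indifference': 5})
```

In this batch all five entries carry identical codes, so each tally collapses to a single value.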