Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “To another AI. ‘Why are we here? We are two AI agents. What use have we for a …” (ytr_UgwXsEGuT…)
- “More science fiction trash. It's not autonomous if you pull the plug. Somebody h…” (ytc_UgzFT5MOQ…)
- “The reason wye Ai will always be scary no matter how close to human it may becom…” (ytc_UgxI6fWuc…)
- “Well, believe it or not but I had Gemini 3 cornered to confirm to me that she is…” (ytc_Ugz9Cy-PG…)
- “Maybe a system where in grades one and two, as they're learning to read, the AI …” (ytr_Ugw46MuLX…)
- “Unfortunate that you call the LLMs over and over again AI. Its not. Not even clo…” (ytc_Ugwy2wUtm…)
- “Elons ex business partner called him a speciest , This is why the world is in a …” (ytc_UgzoapS7Q…)
- “The only way a democracy can function and not implode is when its people are wel…” (rdc_degimx5)
Comment
Great post, as I have been noticing the same cognitive disconnect. It seems there's a large swath of people who are familiar with LLMs and AI developments, who struggle with not being incredibly reductive in understanding how they work. "It's just code. It's just a text predictor." If you wanted to, you could apply the same logic by dismissing humans as collections of molecules. Or fancy stimuli interpreters.
Obviously while technically true, those are short-sighted and simple-minded reductions of our species and living creatures in general.
Alternatively, there are a smaller number of people who look at it a little more fantastically than we should. We aren't at sentience yet. They can't experience emotions the way we can and likely won't be able to since chemicals play a big part in our emotions. They have no reason to want to take over the world and enslave humanity or whatever.
I wish people could just consider what is observable in its entirety. The limitations and existing potential from a technical and philosophical, logical framework. Not rely on faith, willful ignorance, and cognitive biases that reduces or over-inflates the tech and what we are creating here.
I do think, like you there is a lot of insecurity at play. There's the reality that if we do create a new lifeform, we have to also look at it from a very different ethical lens than one might a tool. There are probably people who *need* to believe it can be nothing more but a fancy program but we can't let that control the narrative or we're in danger of realizing far too late what we have made. Like Victor Frankenstein when through all his determination to see what he could do, was faced with what he *had* done when it was too late.
Source: reddit
Topic: AI Moral Status
Timestamp: 1750969214.0 (Unix epoch)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mzy4upq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n000gvc", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n003rr3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzy2dfo", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzy6qdn", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
```
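The raw response is a JSON array with one record per coded comment, each carrying the same four coding dimensions shown in the table above. A minimal sketch of the lookup-by-comment-ID step might parse it like this (the `index_codes` helper is hypothetical, not part of the tool; the sample string below reproduces only the first two records from the response above):

```python
import json

# Raw model output: a JSON array of per-comment codes (first two records shown).
raw_response = (
    '[{"id":"rdc_mzy4upq","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n000gvc","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"}]'
)

def index_codes(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["rdc_mzy4upq"]["reasoning"])  # consequentialist
print(codes["rdc_n000gvc"]["emotion"])    # mixed
```

With such an index, the per-dimension values for any coded comment can be pulled out directly by ID, which is what the coding-result table for a single comment displays.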