Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I would guess that allowing AI to feel pain would be part of a system that imbues them with the capacity to sympathize and empathize with people in a genuine manner. We usually think only of cognition when we think of AI, not emotion or socialization. Some assert that we could not manage to "program" or craft an algorithm for the experience and expression of emotion, but I still wonder as both an electrical engineering and education major.
If we grant them the capacity to feel carefully varied degrees of physical "pain" as a precursor to "death" and emotional "pain" in response to "loss", which we then "hardwire" for them to prefer to avoid based on degree through whatever "learned" means they might develop (learning algorithms, ho!), we may start seeing AI that begin to behave similarly to humans (if we are doing it right, they may have to be "raised" like infants into an "adulthood"... they wouldn't be mature straight away and might even have to be limited in functionality the way a human baby is weak and small... the point is that childhood development principles will likely apply in some way that demands they develop competencies that any other human has to develop, which means that we probably need to create similar conditions for the logic of "building" a proper HUMAN adult), which could both generate/emulate an actual moral agent and drive the existence of AI into an uncanny valley from which they emerge as either a monstrous existence or a practical offshoot/successor for humanity.
For the record, since someone else was talking about the Simpsons, I'll say that I think about Nier:Automata while considering this possibility.
Source: reddit
Topic: AI Responsibility
Posted: 1615793518.0 (Unix epoch seconds → 2021-03-15 UTC)
♥ 3
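The numeric value in the record above is a Unix epoch timestamp (seconds since 1970-01-01 UTC). A minimal sketch of converting it to a human-readable date, using only the standard library:

```python
from datetime import datetime, timezone

# Epoch seconds as shown in the record above.
ts = 1615793518.0

# Convert to an aware UTC datetime (avoids local-timezone surprises).
posted = datetime.fromtimestamp(ts, tz=timezone.utc)
print(posted.isoformat())  # → 2021-03-15T07:31:58+00:00
```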
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_gqt7xtl", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_gqzorkg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_gqulz53", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_gqu5do0", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_gqu2yzp", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
```
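The model returns one JSON array covering a batch of comments; the per-comment table above is the row whose `id` matches the comment being inspected. A minimal sketch of that lookup, assuming the response is a JSON array of flat objects with an `id` field as shown above (`lookup_coding` and `RAW_RESPONSE` are illustrative names, not part of any real tool):

```python
import json
from typing import Optional

# Hypothetical batch response, shaped like the raw LLM output above.
RAW_RESPONSE = """
[
  {"id": "rdc_gqzorkg", "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> Optional[dict]:
    """Parse the model's JSON array and return the row for one comment ID."""
    rows = json.loads(raw)
    return next((r for r in rows if r.get("id") == comment_id), None)

row = lookup_coding(RAW_RESPONSE, "rdc_gqzorkg")
if row is not None:
    # Print the same dimensions the coding-result table displays.
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {row[dim]}")
```

Returning `None` for an unknown ID (rather than raising) lets the caller distinguish "comment not coded in this batch" from a malformed response, which `json.loads` would flag by raising `json.JSONDecodeError`.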