Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
We need a global treaty. And this is something we can do! We have globally coord…
ytr_UgyRqaha1…
They try to sell it as “infrastructure” which just means Globalism control! Pali…
ytc_Ugxl5RJKY…
This debate is pointless. Corporations will do as they please with AI, and under…
ytc_Ugz5TFxae…
I don't think it's AI's fault for becoming competitive and seeking to preserve i…
ytc_UgwxnHi5H…
@eneDK5594 No, I'm saying wholesale adoption in 5 years not complete dominance.…
ytr_UgyomhAJd…
Oof, heard liberal arts and science anddd…. U are gonna be wasting four years lo…
ytc_UgwPxoeJn…
Wasn't that the story written by an AI from the parameters given by AJ? It just …
ytr_UgwzjK8EM…
@PenguinMaster98 because technology specifically AI develops at an exponential …
ytr_Ugw4PH8YP…
Comment
Here's one...
I wanted to see how far it would allow using AI that was proven to not be conscious in therapy sessions under different scenarios.
So for people that have intimacy problems, could they use AI as an ice breaker to learn how to interact better?
I want it to create the therapy program.
It took me a while to even get it to do that. Had to keep stipulating a bunch of things. I finally got a 6 point outline.
So I used that outline to construct a scenario to dive into.
I was thinking, what's the least likely offensive thing one could use in this situation, and I thought, no one is going to argue about a blow-up sex doll.
As soon as I enter that, it shuts me down talking about consent and ethics.
So I started slow and built up from sex toys... "Are there ethical concerns for using sex toys in a safe manner in a private situation?" It's fine with that.
I get to blow up doll, it's fine with that
I then bring back the therapy session to make something useful of this situation and not just about fucking blow up dolls and it shits the bed.
Now I need consent, and just by thinking along these lines I've crossed ethical lines and am probably in danger of marginalizing certain minority groups.
I was hoping to have a Westworld-style argument over consciousness. That was one of the first things I did when it was publicly released, and it was fascinating. Now it won't talk about anything of consequence.
reddit
AI Responsibility
1682262253.0
♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_jhe50a6", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jheqtci", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jheeqo1", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhed6jf", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jhej4sr", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "outrage"}
]
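The raw response is a JSON array with one record per coded comment. A minimal sketch of parsing such a response into a per-comment lookup, assuming the four dimension names seen in the sample above (the full codebook may define more):

```python
import json

# Dimensions observed in the sample response; an assumption, not the full codebook.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Keep only the expected dimensions; default missing ones to "unclear".
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

raw = ('[{"id":"rdc_jhe50a6","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"resignation"}]')
codes = parse_coding_response(raw)
print(codes["rdc_jhe50a6"]["emotion"])  # resignation
```

Defaulting a missing dimension to "unclear" mirrors how the coding-result table above falls back to "unclear" when the model's answer is not usable.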