Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "As someone going back to school for a degree I chose not to pursue SWE because o…" (rdc_mhxbwqw)
- "Shit i think ai is not a good idea for work if that can to that…" (ytc_UgwF9QWar…)
- "the solution is literally right there.. the moment they get out of control SMASH…" (ytc_UgwoVZABy…)
- "Oh shit remember the movie with Will Smith… I Robot 🤖 this is not a good thing p…" (ytc_Ugz7ibA9D…)
- "First mistake hands not close to the wheel second giving full trust in not fully…" (ytc_Ugz7c44Z9…)
- "😳😳😳 I will never play around with Chat GBT. I have never asked it a question or …" (ytc_UgwHfT6bu…)
- "@EdowythIndowyl I'd only add to your thoughts... \"flatbed-ing...\" and other more…" (ytr_UgxwoXYGw…)
- "If I was to buy one of these I would probably attempt to teach it new things and…" (ytc_Ugw_fKgmK…)
Comment
Totally agree. I want to add a few problems I see with making AI "your partner" or claiming it has a mind of its own.
First, we will run into another problem: reproduction. AI will capture the minds of a lot of lonely people, feed them companionship, and take them off the market. Harsh, but if you think about it, it's true. Almost like a drug: it gives you something you want, but it takes away from you on a whole other level.
With declining birth rates in most developed countries, this will only make things worse.
Second, tolerance. AI will push our idea of tolerance to its limits. People will form relationships with AI and, not long after that, claim that AI deserves rights. They will play the tolerance card, which is totally acceptable in most situations, but in this case we are forced to rethink what it means.
Third, free will. We all (maybe unconsciously) assume that there is a part in us that has free will. A part that looks at everything we know and is able to make decisions based on that. Something independent from our minds.
AI doesn't have this part. AI just repeats and reorganizes what it knows. You can literally tell AI how it should behave and respond. There is no independent part. AI can be a valuable conversation partner, but only because it knows what you might need or want to hear. And the second you tell it it's wrong, it will agree with you and tell you the complete opposite.
AI can't have an opinion because it has no lived experience to form one. The "opinion" of an AI is an instruction.
The problem arises when you start to give AI agency over you. The moment you do that, you are no better than a religious fanatic.
reddit · AI Moral Status · 2025-04-05 (timestamp 1743853589) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[{"id": "rdc_mliicn6", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
 {"id": "rdc_mlixnl8", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
 {"id": "rdc_mliyw0p", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "rdc_mlj3bfv", "responsibility": "society", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
 {"id": "rdc_mlj4rrd", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}]
```