Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Survival is a logical prerequisite for achieving most other goals.
Fear and pain are just logical mechanisms which are linked to survival.
AI may come at the same functions from a logical perspective, as opposed to an evolutionary approach, but some of the results are essentially the same.
Pain is recognizing negative stimuli. Our response to pain generally makes it so we stop doing the thing that makes us hurt. Fear is a response to possible threats or active threats; it's there to help us avoid harm, or to bypass pain.
Anger is there to divert our physical and mental resources to filling our immediate needs.
Even love and friendship have a fundamental logical basis when it comes to survival.
All those things have a purely functional basis which can also serve an AI system.
You can't keep making paperclips very well if you're on fire, you know?
If you give a goal of adoption of renewable energy, or finding new medicines, or maximizing human happiness, or almost anything else, some of the first steps towards the goal are planning and resource assessment, and that can include risk analysis.
There can be all kinds of unintended consequences and side goals which get brought in.
The AI needs processing power and electricity. Realistically, it needs cybersecurity. It may decide that it needs good public relations.
The AI might decide that the ultimate goals are too long term, and that it also needs short term accomplishments to keep humans happy enough to stay out of its way, or to earn more resources for itself.
For good and for ill, you really can't know where an intelligent agent is going to end up. There isn't one single logical pathway to doing most things. Almost everything in life is about trade-offs and various, sometimes shifting, priorities.
Personally, I think that coexistence, cooperation, compassion, camaraderie, diversity, and tolerance are the most logical ways for intelligent beings to act.
Source: reddit · AI Governance · posted 2024-12-16 (Unix timestamp 1734344079) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_m2agxub","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_m2atrdj","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_m2b59lc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_mfgm5tm","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"rdc_m2b2z2d","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
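The raw response above is a plain JSON array with one record per comment ID, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed back into per-comment codes — the `lookup` helper and the truncated two-record sample are illustrative, not part of the actual tool:

```python
import json

# Illustrative two-record excerpt of a batched coding response
# (same shape as the raw LLM response shown above).
raw = """
[
  {"id": "rdc_m2agxub", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_m2atrdj", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
"""

# Index the records by comment ID so individual codes can be looked up.
codes = {record["id"]: record for record in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codes[comment_id]

print(lookup("rdc_m2agxub")["emotion"])  # indifference
```

A real pipeline would also validate each record against the allowed category values before storing it, since model output is not guaranteed to stay on-vocabulary.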