Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse random samples.
Comment
I am not sure that is the case. Let's consider a different example: hunger. Hunger can be experienced on several different layers. Typically it means that an organism detects a lack of nutrients and undergoes temporary changes in state, function, and behaviour to conserve and obtain nutrients. When cells experience hunger, they express or suppress different genes and metabolic pathways, and so on. A multicellular organism typically has hormones which signal to the rest of the body that nutrients are running low, allowing cells to react in unison. Similarly, the brain adds another layer, where neurological behaviour changes.
Although all of these layers are implemented in completely different ways, we generally agree that all of them correspond to feeling hunger, because they serve the same purpose for the same reason; we can even trigger them separately using drugs or electrodes. Now, say your phone is running low on battery. Your phone can detect this, change state to conserve power, and take action to obtain power (the best it can do is notify the user). If we accept that hunger may be felt in completely different ways by living things, I'd say it is justifiable to recognize your phone's experience as "feeling hunger". Note that your phone is actually far more conscious than a single cell.
Now let's say you have an NPC character in a game. The character is programmed to avoid death. It can detect taking damage, monitor the state of its health bar and hunger bar, and change its behaviour accordingly. I'd say it is not that far a stretch to recognize its ability to suffer, despite the fact that its suffering is implemented in a completely different way than in humans. The character may even have rudimentary compassion: the ability to recognize suffering in other characters, including the player, and react to it in a helpful way.
What I'm trying to say is: when you build a machine with a self-preservation goal, it is automatically in a grey area where it may be capable of suffering, depending on how broad your definition is. Not recognizing its particular form of suffering (just because our experience of suffering is implemented differently) may even be a form of racism, which (unlike racism toward other humans or animals) may not be morally OK in this case.
Source: youtube | Video: AI Moral Status | Posted: 2017-02-23T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ughgv7iY07dgTHgCoAEC.8PKQh0lPF8S8PKbc3nzIAa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ughgv7iY07dgTHgCoAEC.8PKQh0lPF8S8PKyy545rVu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ughgv7iY07dgTHgCoAEC.8PKQh0lPF8S8PL3H-oWz5B","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgguDvg9CgsPlXgCoAEC.8PKQg1Pf6gK8PKc_pBnsUA","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgjA4P2zvsANW3gCoAEC.8PKQ9Hag29m8PKZKqNPj6-","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UggjLJg0B5wF13gCoAEC.8PKQ1GdiHQu8PKTwojH3zq","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_UggjLJg0B5wF13gCoAEC.8PKQ1GdiHQu8PKU72pei1P","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytr_UggjLJg0B5wF13gCoAEC.8PKQ1GdiHQu8PKUhss0zjQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiF1GtuwNWeLngCoAEC.8PKQ-ggL5F38PKWOWuvUml","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UghhM8kfbs8KNngCoAEC.8PKPW_ZjQHb8PKWuhrMaot","responsibility":"company","reasoning":"unclear","policy":"regulate","emotion":"indifference"}
]
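To make the raw response above usable for the "look up by comment ID" workflow, it can be parsed into a dictionary keyed by ID, with each coding checked against the dimension values seen in this sample. A minimal sketch in Python — note that the allowed-value sets below are only the values observed in the response above; the full codebook may define more, and the function name `index_by_id` is illustrative, not part of the actual pipeline:

```python
import json

# Dimension values observed in the sample response above (assumption:
# the real codebook may permit additional values).
OBSERVED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "approval"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) and index codings by comment ID.

    Raises ValueError if a row carries a dimension value outside the
    observed sets, which flags malformed or off-schema model output.
    """
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in OBSERVED}
    return coded

# Usage with a hypothetical one-row response:
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
coded = index_by_id(raw)
print(coded["ytr_example"]["reasoning"])  # → unclear
```

Validating each row before indexing means a single off-schema value fails loudly at parse time rather than surfacing later as a silent coding error.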