Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
>More generally - all concepts intended to represent reality are arbitrary, vague, and social constructs.
Maybe you should have used this (the more general claim) as your title, rather than the moral aspect (the more derivative claim). Because of this, any discussion is bound to be hopelessly confused, as your metaphysical claims are mixed together with your moral claims.
I'll start with your conclusion on morality, because I believe it contains a rather glaring omission. Then I'll move back to the arbitrariness of concepts.
>The algorithms will be like a set of tools with the understanding that each can have strengths and weaknesses and achieve different ends, but just like tools in a workshop, there is no one ‘correct’ tool - **it just depends on what goal you choose.** And once moral classifications are chosen, we can answer scientifically whether something adheres to it without being sidetracked by the question of whether a particular classification of morality is correct. With the understanding and acceptance of my thesis, we can transition moral philosophy into a science of morality.
(Emphasis mine.) This misses the point entirely. The whole point of morality is to decide on which goals you choose. Your algorithm is just a regular planning/control algorithm, no different from any planning algorithm we can make using known techniques. Overall, this paragraph says nothing else than "once we've all agreed about what is moral, we won't get sidetracked about what is moral". While this is true, this is also not particularly helpful. How do you propose we reach an agreement? On what basis? If it's arbitrary, *and yet people disagree*, we're just stuck.
At the end of the day, assuming we accept your thesis, we are still at step 0 of moral philosophy: how do we agree on the goals?
Back to concepts now.
>More generally - all concepts intended to represent reality are arbitrary, vague, and social constructs.
I don't think this statement really makes much…
Source: reddit · Topic: AI Responsibility · Posted: 1495835792.0 (Unix timestamp; 2017-05-26 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
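A coding record with the dimensions shown above can be checked against a closed codebook before it is stored. The sketch below is illustrative: the `ALLOWED` sets contain only the values actually observed in this dump, and the real codebook almost certainly has more categories.

```python
# Allowed values per coding dimension. These sets are an assumption,
# reconstructed only from the values visible in this dump.
ALLOWED = {
    "responsibility": {"none", "government"},
    "reasoning": {"mixed", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is missing or not in the codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above.
record = {"responsibility": "none", "reasoning": "mixed",
          "policy": "none", "emotion": "indifference"}
print(validate(record))  # [] -> every dimension holds a known value
```

An empty list means the record is well-formed; any names returned point at the dimensions to re-code.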
Raw LLM Response
[
{"id":"rdc_di32txv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di38b7x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di2wpnk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di3r4u7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_fvw1s50","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
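Since the model returns one JSON array per batch, looking up the coding for a single comment means parsing the array and indexing it by comment ID. A minimal sketch, using the batch shown above (the helper name `index_by_id` is illustrative, not part of the tool):

```python
import json

# Raw batched LLM response, copied verbatim from the dump above.
raw_response = '''[
{"id":"rdc_di32txv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di38b7x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di2wpnk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_di3r4u7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_fvw1s50","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a batch response and key each coding record by its comment ID."""
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["rdc_fvw1s50"]["policy"])  # regulate
```

This is how the per-comment "Coding Result" view above can be derived from the raw batch: one lookup by ID yields the four coded dimensions for that comment.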