Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ... Whether or not Einstein boy was told that.. kind of glad he was.. because I … (ytc_UgzFVMNAu…)
- Would you care about AI that much if you didn't need to worry about money? Like … (ytc_UgzgT9cWK…)
- I can't help but notice that all of the signatories are prime candidates to have… (ytc_Ugy7JuybN…)
- The real value of AI isn't creative thinking. Its cheap robot slaves to replace … (ytc_UgyFYXqIP…)
- Step 1 to escape the robot: by water / Step 2 to escape the robot: poor water… (ytc_UgzyI9j1W…)
- Rutherford Institute Launches Inquiry Into Government Use of Drivers’ License Ph… (ytc_UgwGkZRjp…)
- I know that there’s probably gonna be a lot of disagreement about this, but as a… (ytc_UgzeRiAOs…)
- The sad reality is that they probably even asked you to test out the ai and kink… (ytc_UgwTpXKkH…)
Comment
Its barely science- anyone with a basic reading knows that while we have insights into the function of the brain and the creation of artificial intelligence, the fields are still as far apart as physics and chemistry in the 18th century- we simply don't have the data to fill in the gaps yet. Anything else is just speculation.
Our use of evolutionary modeling is just that - modeling the iterative systems. We don't hear people talking this shit about using evolutionary modeling to show how rivers wear down canyons. People like to port high-minded concepts like selection pressure without study of why we apply them or why we need to rigorously test areas where our brains over-apply pattern recognition.
Its barely philosophy too- it fails to give foundations to its discussion, launching into incredibly complex and contested subjects immediately and relying on the inherent 'fuzziness' created to disguise transitions between ideas rather than provide actual backing to them.
Its fairly trivial to say, 'we are using this modelling (evolutionary modeling) on these concepts, aren't they similar' and 'these concepts all exist in the same context so they are related, right?' without actually adding anything to the discussion. There is no system here to test its hypothesis, no structure to stress-test its ideas, and its grounding is mostly referencing others work and expecting people to infer the rest.
I'll call it how I see it- empty restatement of existing concepts for their emotional component rather than anything novel or worthwhile. It barely achieves its goal in placing these concepts due to its lack of grounding.
Source: reddit
Topic: AI Moral Status
Timestamp: 1499620662.0 (Unix time, 2017-07-09 17:17 UTC)
♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
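
The rows above correspond to the per-comment fields the model returns in the raw response below. A minimal sketch of how one coded record could be represented, assuming these field names and example category values (inferred from this page, not from a documented schema):

```python
from dataclasses import dataclass

# Hypothetical record mirroring the Coding Result table above; the field
# names and example category values are assumptions inferred from the raw
# response shown below, not a published schema.
@dataclass
class CodedComment:
    id: str              # comment ID, e.g. an "rdc_..." or "ytc_..." string
    responsibility: str  # e.g. "none", "company", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
    policy: str          # e.g. "none", "regulate", "liability", "unclear"
    emotion: str         # e.g. "resignation", "fear", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp recorded when the coding was saved
```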
Raw LLM Response
```json
[
  {"id":"rdc_fvonmsg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_hsnvqse","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_hsmeic6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_djzl4ik","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_djzozwl","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
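
The raw response is a JSON array with one object per comment in the batch, each carrying the comment ID and the coded dimensions. A minimal sketch of how such a response might be parsed and indexed for the lookup-by-ID workflow above; the function name and variables are illustrative, not part of any real tool:

```python
import json

def index_llm_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response like the one above and index it by comment ID.

    Assumes the model returned a JSON array of objects that each carry an
    "id" plus the coded dimensions; entries without an "id" are skipped.
    """
    coded = {}
    for entry in json.loads(raw):
        if isinstance(entry, dict) and "id" in entry:
            coded[entry["id"]] = entry
    return coded

# Example lookup using an ID that appears in the response above:
#   coded = index_llm_response(raw_response_text)
#   coded["rdc_djzozwl"]["emotion"]  # -> "resignation"
```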