Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "At least you will agree that, except for some rare situations, you can only get …" (rdc_g9u2otp)
- "What I resent.....is my electricity bill is getting higher (substantially) .....…" (ytc_Ugwh4-bDy…)
- "You dont need AI to kill innocnet civilians. You need AI to identify Combattants…" (ytc_Ugxd6HZ5P…)
- "They want to have Driverless Trucks...... Ok but what if a Child runs out in fro…" (ytc_Ugyf3T0EX…)
- "I think AI needs a union/democracy that’s not pigeonholed within these companies…" (ytc_UgynTd5bh…)
- "The blah blah blah here applies to basic newbies devs yeah AI can easily replace…" (ytc_UgwR2ORzL…)
- "I remember seeing the Teamsters endorse Trump (with Elon standing behind him) an…" (ytc_UgzZx36sW…)
- "Hi Charle, after I got the answer from Chatgpt, I copy the text but it is insid…" (ytc_UgxVnSORf…)
Comment
Those are good points, but I still think you are seeing an AI that's acting on it's own wants. A machine doesn't want anything, it responds to humans wants and needs.
My take it's that the technology wont be the problem, humans will. If a human asks a computer to save the earth, but doesn't create a command saying that killing humans is not an option, that's a human mistake, after all.
It's like a nuclear power, it is capable of creating clean energy and save humanity, or of mass destruction, accidents might happen if we are not care enough, but in the end of the day, it's still a human problem.
Source: reddit · Topic: AI Moral Status · Posted: 1655294512.0 (Unix timestamp) · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_icg0n7o", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_icfwvfn", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_icg0goj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_icg04dc", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "rdc_icg19wh", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}
]
```
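The raw response is a JSON array with one object per comment ID, each carrying the four coded dimensions shown in the Coding Result table. A minimal sketch of how such a payload could be parsed back into per-comment records (the `parse_codes` helper and the "unclear" fallback for missing fields are illustrative assumptions, not part of the pipeline shown):

```python
import json

# A shortened copy of the raw model output shown above.
RAW = '''[
  {"id":"rdc_icg0n7o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icfwvfn","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# The four coded dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw response into {comment_id: {dimension: value}}.

    Records missing a dimension are filled with "unclear", mirroring the
    fallback value seen in the Coding Result table (an assumption here).
    """
    out = {}
    for rec in json.loads(raw):
        out[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return out

codes = parse_codes(RAW)
print(codes["rdc_icfwvfn"]["policy"])  # regulate
```

A parser like this also surfaces the truncation bug above early: `json.loads` raises `JSONDecodeError` on a payload whose closing `]` was mangled, rather than silently rendering an all-"unclear" table.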