Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Laws preventing AI to develop in certain ways, should have been discussed by gov…" (ytc_UgzTpc7V_…)
- "I mean, that's basically how a lot of women are looking these days but... 😅…" (ytc_Ugys_WGRI…)
- "@rockstar8573 not smarter, but trained. You seriously don't believe an AI can f…" (ytr_UgxixpkWg…)
- "Ai artists saying AI is inevitable like fucking movie characters just proves tha…" (ytc_UgytoeLMH…)
- "Its a chatbot trained in historical data. If you train it to answer that it has …" (ytc_UgxrU2ZrW…)
- "Permission is not needed for transformative work, but you do need permission fro…" (ytr_UgxAHwoxj…)
- "The point is that these things are being driven forward by 0.0001% of the popula…" (ytr_UgyttQf3L…)
- "@matyldasetlak9378 If that’s art A.I. art is too, you spend time by waiting for …" (ytr_Ugzc2d9t0…)
Comment
Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes, and can no longer be edited. Please refer to my statement which I can continue to edit. I often edit my submission statement, sometimes for the next few days if needs must.
______________________________________________________________________________________
Two important considerations--
>For example, he said, AI could help identify all the resources a nearby hospital has — such as drug availability, blood supply and the availability of medical staff — to aid in decision-making.
>“That wouldn’t fit within the brain of a single human decision-maker,” Turek added. “Computer algorithms may find solutions that humans can’t.”
and
>Peter Asaro, an AI philosopher at the New School, said military officials will need to decide how much responsibility the algorithm is given in triage decision-making. Leaders, he added, will also need to figure out how ethical situations will be dealt with. For example, he said, if there was a large explosion and civilians were among the people harmed, would they get less priority, even if they are badly hurt?
>“That’s a values call,” he said. “That’s something you can tell the machine to prioritize in certain ways, but the machine isn’t gonna figure that out.”
Source: reddit
Topic: AI Responsibility
Posted: 1648679512.0 (Unix timestamp, ≈ 2022-03-30 UTC)
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_i2wwksr", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_i2whbf1", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_i2utbur", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_i2v5jm5", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_i2rwc60", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
```
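The lookup-by-comment-ID step above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: it assumes each raw LLM response is a JSON array of records keyed by `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`), and that a comment missing from the array falls back to `unclear` on every dimension, which may be why the Coding Result table above shows all-`unclear` values. The names `index_by_id` and `lookup` are hypothetical helpers.

```python
import json

# Two records copied from the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id":"rdc_i2wwksr","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_i2rwc60","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
"""

# The four coding dimensions used throughout this dump.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}


def lookup(codes: dict[str, dict], comment_id: str) -> dict:
    """Return the coded dimensions for one comment.

    Comments absent from the model output (or missing a dimension)
    fall back to "unclear" — an assumption, not confirmed behavior.
    """
    rec = codes.get(comment_id, {})
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}


codes = index_by_id(RAW_RESPONSE)
print(lookup(codes, "rdc_i2wwksr")["responsibility"])  # government
print(lookup(codes, "rdc_missing")["policy"])          # unclear
```

With this indexing, inspecting "the exact model output for any coded comment" is a single dictionary access, and unmatched IDs degrade gracefully to `unclear` instead of raising an error.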