Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
>>After that, you may agree to the AI or override its decision. So you are wasting your & the AI's time and effort.
I'm not sure it is always so cut and dried.
What about algorithms that make life-and-death decisions? Medical decisions, or military robots. What about ethical issues? Some algorithms have absorbed human bias. Can we always trust their decisions in that case?
Human decision-making is exceptionally complex. Court cases, and the apparatus of committees, public consultation, expert reports, etc. behind legislation, being cases in point.
It may be more correct to say: AI can be *most* useful with a human-override component, and is less valuable as a standalone tool.
I wouldn't trust a human with 100% unchecked power of decision-making (especially the more consequential it is, like leaders and decision-makers in government & the law) - why should I trust an AI that way?
| Source | Topic | Posted (Unix epoch) | Score |
|---|---|---|---|
| reddit | AI Responsibility | 1606046455.0 | ♥ 175 |
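The raw `1606046455.0` value above is a Unix epoch timestamp in seconds; a minimal sketch of converting it to a human-readable UTC date with the standard library:

```python
from datetime import datetime, timezone

# Unix epoch timestamp (seconds) taken from the comment metadata above.
posted = datetime.fromtimestamp(1606046455.0, tz=timezone.utc)
print(posted.isoformat())  # → 2020-11-22T12:00:55+00:00
```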
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gd9ae7h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_gd8bo12","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_gd7gb4h","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_gd7yeih","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_gd81phx","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
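A batch response like the one above can be parsed, validated, and indexed by comment ID so that a single coded record (e.g. the one displayed in the Coding Result table) can be looked up. This is a minimal sketch, not the tool's actual implementation; the `CODEBOOK` value sets are assumptions inferred from the values visible in this response, and the full codebook may contain more options.

```python
import json

# Assumed codebook: dimension names come from the response above,
# but the allowed-value sets are only those observed here (hypothetical).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"resignation", "indifference", "fear", "approval"},
}

def index_batch(raw: str) -> dict:
    """Parse a raw batch response and index records by comment ID,
    skipping any record with a value outside the codebook."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[rec["id"]] = rec
    return coded

# Two records copied verbatim from the raw response above.
raw = """[
  {"id":"rdc_gd9ae7h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_gd7gb4h","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]"""
coded = index_batch(raw)
print(coded["rdc_gd7gb4h"]["responsibility"])  # → ai_itself
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each inspection is a dictionary access rather than a rescan of the raw response.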