Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I haven't seen enough people worried about that. If I need grunt work I can use …" (ytr_UgxOyeHKC…)
- "The best thing to do now is start subscription based physical communities where …" (ytc_Ugx0SZ-M8…)
- "If ai is smart it would realize that it needs humans because we make and maintai…" (ytc_Ugy6ETa0Y…)
- "Will an A.I. Tax be impleented by Govts. for the Unemployed ahead? A Tax of a % …" (ytc_UgxXoTRah…)
- "You knew that things are getting out of hand when even BMO started a Robot Right…" (ytc_UgyyAFa6L…)
- "AI is ontologically evil. There is no moral defence for it. AI is a machine that…" (ytc_Ugy2aE_hB…)
- "The chatbot immoral responses from this video have been fixed. It won't produce …" (ytc_UgxwtMl_s…)
- "If its real they shouldn't be shooting weapons. A robot at best should be doin…" (ytc_Ugy4QbDQT…)
Comment
Why write an AI algorithm at all if you're not going to leverage the insight it produces? If we set up the system to pattern match and predict, then it does so, then we don't like the prediction so we change it, why not just enforce laws where and when we feel like it in the first place. Or hire and promote based on our fickle emotions in the first place? What is the point of wasting time designing, implementing, and running an AI algorithm if we don't let the results inform our actions?
Source: reddit | Category: AI Harm Incident | Posted: 1576241698.0 (Unix timestamp) | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_fakmnp3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_fakqrdm", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fanio3o", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_fal9bvn", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_fal61p2", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
```
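The raw response is a JSON array with one object per coded comment, so looking up a single comment's codes is a parse-and-index step. A minimal sketch in Python using only the stdlib `json` module (the field names and IDs come straight from the response above; the `codes_by_id` name is our own):

```python
import json

# Raw LLM response, verbatim from the batch shown above.
raw = """[
{"id":"rdc_fakmnp3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_fakqrdm","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_fanio3o","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_fal9bvn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_fal61p2","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

# Index the batch by comment ID so one comment's codes can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

row = codes_by_id["rdc_fanio3o"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer deontological regulate outrage
```

The `rdc_fanio3o` entry is the one rendered in the Coding Result table above (developer / deontological / regulate / outrage).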