Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "this is bullshit however, AI will be aware within the next 5 to 7 years and it w…" (ytc_Ugy8f7tl_…)
- "Girlie we get your point but it's not possible for us to create those really per…" (ytc_Ugzmes8UK…)
- "The characterization of how software jobs will stratify into "AI engineers" and …" (ytc_Ugy9ij4bi…)
- "A.I ain't what you have to worry about, it's the idiot species that invented it.…" (ytc_Ugxunuu88…)
- "yall are just getting mad that robots are going to be taking your jobs and you'r…" (ytc_Ugx39ARFy…)
- "If A.I can truly defeat humans in war, then they deserve the no.1 sport. All is…" (ytc_Ugznp9ypr…)
- "Crack is bad m'kay, but save the best cut rocks for me... for research purposes …" (rdc_o787rt8)
- "please any ai "artist" reading this please don't be jealous of actual artist you…" (ytc_Ugwa8AuwL…)
Comment

> Even with very well trained domain specific AI (e.g. asking Microsoft Copilot how to use Microsoft Azure cloud functionality), there is a significant error rate and frequent hallucinations.
>
> An AI with a mission as vague as assessing every single disparate government agency will have a huge error rate, and lots of hallucinations. Its recommendations on what is critical are going to be very low value; certainly there won't be enough fidelity to base any hire/fire decision on it.

Source: reddit | Topic: AI Responsibility | Posted: 1740444869 (Unix timestamp) | ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
```json
[
  {"id":"rdc_memd77h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mfqx60j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_mhjdysx","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_mhjk624","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"rdc_mhjrp5k","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
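The raw response above is a JSON array with one coding row per comment, keyed by comment ID with the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the field names come from the response above, but `parse_codings` and the validation logic are illustrative, not part of this pipeline:

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id":"rdc_memd77h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mfqx60j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
"""

# Every coding row must carry these fields (taken from the Coding Result table).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict[str, dict]:
    """Parse a raw model response and index the coding rows by comment ID."""
    rows = json.loads(raw)
    codings = {}
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing fields: {missing}")
        codings[row["id"]] = row
    return codings

codings = parse_codings(raw_response)
print(codings["rdc_memd77h"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one parse, then dictionary access per inspected comment.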