Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Nothing new, we have some company using ai to call and sell photovoltanic panels,…
ytc_UgwvWB57P…
It's hard for people to be honest about what the future of AI is when their ment…
ytc_UgzNcU3Oe…
No. AI is not using any of the original work, AI learned how to draw.
About usin…
ytc_UgzXrrpQP…
why is this even a debate??? No mater how convincing an AI may be, it will never…
ytc_Ugi6zFhOr…
Humans born in to a world of more advanced A.I. and robotics 30 or so years from…
ytc_UgwUaQJ7x…
what if AI solved poverty and hunger, and it only took the money that already ex…
ytc_UgxegWsSq…
I'm pretty much sure it's not a concern as impoliteness on the AI side is/will b…
ytc_UgyKsqbAi…
It’s the hacks who write scripts for reality tv shows or Hallmark movies etc. th…
ytr_Ugz5ZTah8…
Comment
I'm an attorney who is a full-time professional legal writer and I ASSURE you that partner is a fucking moron. Partners at prominent law firms being fucking morons is not a new thing, I'd say you can usually find at least one at any firm with more than 20 partners. You can also find judges who are fucking morons, as demonstrated by the fact that AI-generated slop has made it into published opinions a couple of times this year. If Andrew Yang is buying this he's a sucker.

I've encountered the legal writing LLMs do and it's literally worse than what a pro se litigant could produce with no knowledge of the law. The logic LLMs apply is of a very superficial kind based largely on how frequently words appear together in the data sets they train on. They have no "understanding" of WHY those words appear together frequently or of the concepts that underlie what they are "saying." For some purposes that doesn't matter at all and you can get pretty impressive results; they're very good at, say, summarizing news articles. But there's an underlying logic to legal arguments that they absolutely do not factor in, and as a result what they produce is total nonsense. They famously "hallucinate" citations to non-existent sources, misquote sources, and make up legal principles that don't exist or are inapplicable to the jurisdiction or the case they're used in. From what I've seen they're also very likely to confidently produce results that the person using the LLM wants to hear when the actual law is not on their side.

Using an LLM for legal pleadings is akin to using it to produce a design for a bicycle based entirely on the statistical analysis of words used in a data set consisting of other bicycle designs, when it has no concept of physics or mechanics. It can make you something that, at first glance, looks an awful lot like a design for a bike. But if you try to actually BUILD the bike it will immediately reveal itself as total garbage; it will invent bike parts that don't exi
reddit
AI Jobs
2025-07-27 19:38:05 UTC (Unix 1753645085)
♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_n5hoibr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_n5hi9e1","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_n5gfutu","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_n5gl59f","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"rdc_n5glbp2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
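Since the raw LLM response is a JSON array of coding records, the "look up by comment ID" view can be backed by parsing the array and indexing it by `id`. A minimal sketch, assuming the response shape shown above; the function names here (`parse_codings`) are illustrative, not part of any real tool:

```python
import json

# Sample raw response in the same shape as the batch above (two records kept
# for brevity); in practice this string would come from the model output.
RAW_RESPONSE = """
[
  {"id": "rdc_n5hoibr", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n5hi9e1", "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"}
]
"""

def parse_codings(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = parse_codings(RAW_RESPONSE)
print(codings["rdc_n5hi9e1"]["emotion"])  # -> outrage
```

Keying on the comment ID makes each coded dimension (responsibility, reasoning, policy, emotion) retrievable in one dictionary lookup, which matches how the inspection view above pairs a single comment with its coding result.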