Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_c2vp3pg`: "I think another strong pillar in any such system MUST be an evidence-based attit…"
- `rdc_mbobknq`: "Can you try Gemini with deep research and compare this? I really like what Goog…"
- `ytc_Ugwb9fUgB…`: "Oh man I feel bad for the future kids cuz they may get bullied by fake AI videos…"
- `ytc_Ugzpq-UIz…`: "We always rely on regulations to enforce safety. This isn't unique to autonomous…"
- `ytc_UgxsemRjr…`: "What if all artists switched to physical media cold turkey. These Ai fakers woul…"
- `ytc_UgzVt4V5I…`: "What I find interesting is the market for AI. What you see in software industrie…"
- `ytc_UgyiNR4gz…`: "'AI will democratise art for the people, people like the companies that will rep…"
- `ytc_UgydfJwRk…`: "Taxis that are self driving cars should be much cheaper without the drivers sala…"
Comment
Similar story here.
I do some automation with Ansible within our cybersecurity team. I come from a software background, but others do not. My manager had the brilliant idea to cross-train people, so another guy, who has only a cybersecurity background and has never written software, started working on automation too.
So I trained him. I explained Git, Ansible, PowerShell, how to approach automating the verification of our cybersecurity requirements, and so on. I offered to help him write his code several times, but he wanted to do it himself.
Turns out he used ChatGPT for literally everything, never understood what his code was doing, and either it would not run properly or, when it did, it never actually performed a proper verification of the requirement.
Eventually, he realized his ChatGPT strategy was not working, so he asked to copy my code, since his requirement was almost identical to one I had worked on earlier. I gladly gave him my working code, but he then rewrote it using AI so it would not look like plagiarism, breaking everything in the process. So on his merge request, I had to fix the code I originally gave him.
The last requirement he had to automate involved logging into a database as a temporary unauthorized user. I don't know how he prompted ChatGPT, but he somehow ended up with a recursion depth error. Like, seriously? How do you get a recursion error with code that just logs a user into a database?
Source: reddit · AI Jobs · Posted: 1773598127.0 (Unix time, ≈ 2026-03-15 UTC) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_oalzzwf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_oanoz0g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_idp3vyx","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nhxoj3k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_nikkrpo","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
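The raw response is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of how such a batch might be parsed and validated before it reaches the results table above; the allowed label sets here are inferred only from the values visible on this page, not from the actual codebook, so they are an assumption:

```python
import json

# Hypothetical vocabularies, inferred from labels seen on this page only.
# The real codebook may allow more values per dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "resignation"},
}

RAW = """[
{"id":"rdc_oalzzwf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_oanoz0g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_idp3vyx","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_nhxoj3k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_nikkrpo","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""


def parse_coding_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codes by comment ID,
    rejecting any label outside the (assumed) allowed vocabulary."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, vocab in ALLOWED.items():
            if row[dim] not in vocab:
                raise ValueError(f"{cid}: unexpected {dim} label {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded


codes = parse_coding_batch(RAW)
print(codes["rdc_idp3vyx"]["policy"])  # regulate
```

Indexing by ID makes the "inspect the exact model output for any coded comment" lookup a plain dictionary access, and the validation step catches the common failure mode of an LLM inventing a label outside the codebook.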