Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- rdc_g0x3p0e: "This is a naïve way of thinking about it imo. It's already here, it's already st…"
- ytr_Ugx4YviYO…: "Europe is already working on a plan for a balance between human jobs…"
- ytc_UgxM9FIZA…: "why blame ai they’ve been very obvious that ai doesn’t understand what it’s told…"
- ytc_Ugy_EPCuq…: "I suspect Google gets heavy AI funding from the US military. That’s why they avo…"
- ytc_UgzRh-hlu…: "Thats not how you would make a robot you would do rails and conveyor belts this …"
- ytc_Ugx0f1ppb…: "At least they are upfront about it being AI, but I still don't support it. You c…"
- ytc_UgyMr6yg8…: "Theres no way than an ai could possibly show human emotions like happiness, ange…"
- ytc_UgwsJ45Ht…: "All I know is that even with 18 years of experience and training in drawing, I b…"
Comment
New technical requirements based on business requirements are also part of the context, and that's something you have to remember to give the AI model. Also, if your codebase is messy or complicated, in my experience AI struggles to understand it.
Source: reddit · AI Jobs · 1752773385.0 (17 Jul 2025 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n3o106k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_n3p69lj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3l1o4g", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_n3kulpd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n3o1gfs", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
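The raw response above is a JSON array with one coding record per comment, using the same dimensions as the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be indexed by comment ID, skipping malformed entries — the `codes_by_id` helper and the `REQUIRED_FIELDS` set are illustrative, not part of the actual pipeline:

```python
import json

# Example batch in the same shape as the raw LLM response shown above
# (only the first two records, for brevity).
raw = '''[
 {"id":"rdc_n3o106k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_n3p69lj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Every record must carry all four coding dimensions plus its comment ID.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def codes_by_id(raw_json: str) -> dict:
    """Index coding records by comment ID, dropping malformed entries."""
    indexed = {}
    for rec in json.loads(raw_json):
        if isinstance(rec, dict) and REQUIRED_FIELDS <= rec.keys():
            indexed[rec["id"]] = rec
    return indexed

codes = codes_by_id(raw)
print(codes["rdc_n3o106k"]["emotion"])  # -> fear
```

Indexing by ID makes the "Look up by comment ID" view straightforward: each coded comment's dimensions can be fetched in one dictionary access.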