Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "An exclusive first look at a UN Institute Report on the risks of autonomous wea…" (ytc_Ugxp0sARB…)
- "The fact that we might program AI to create better AI could make this an actual …" (ytr_UgirsstFT…)
- "Psh, what's ai gonna do? Send spam emails? ww3 is the biggest problem of all.…" (ytc_UgyP8Q_jk…)
- "A lot of humans themselves don't have basic rights so i highly doubt we would do…" (ytc_UgyNLABXw…)
- "Thank you so much for writing this out. What you said is so apt and true. Humans…" (ytr_UgzuA4ql7…)
- "I would just like to voice my opinion a little bit. It baffles me that so many a…" (ytc_UgygIcj-_…)
- "I think when people say it’s steals stuff they’re talking about the first iterat…" (ytr_UgwK7K_6n…)
- "it depends a lot on how the AI is trained, in stable diffusion the AI decomp…" (ytc_UgwvSEGW_…)
Comment

Not sure if it’s made up or proprietary Amazon codebase *does* contain those methods. Interesting reverse engineering case. Try asking the AI to define those methods for you.

Source: reddit, "AI Jobs"
Posted: 1687908836.0 (Unix time; 2023-06-27 UTC)
Score: ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jpr7u6y", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jprwnmy", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jprk8vx", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jpsf1h5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jpqq74x", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
```
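A minimal sketch of how a raw batch response like this could be parsed, validated, and indexed by comment ID, supporting the "look up by comment ID" view above. The four dimension names come from the coding table; the validation logic and function names here are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Raw batch output from the coder model, copied verbatim from the response above.
RAW = (
    '[{"id":"rdc_jpr7u6y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jprwnmy","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},'
    '{"id":"rdc_jprk8vx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jpsf1h5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jpqq74x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}]'
)

# The coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a batch response and index the coded dimensions by comment ID."""
    records = json.loads(raw_json)
    out = {}
    for rec in records:
        # Each record must carry an id plus all four coding dimensions.
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codes = index_codes(RAW)
print(codes["rdc_jpqq74x"]["emotion"])  # -> resignation
```

Indexing by ID makes the lookup robust even if the model returns records out of order, which is a common failure mode in batch-coded LLM output.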