## Raw LLM Responses

Inspect the exact model output for any coded comment.

**Look up by comment ID**
**Random samples — click to inspect**

- `ytc_UgwtYDlL0…`: "Do we have to sell our organs to live next time? Or be subject to human exploita…"
- `ytc_UgwusCEc0…`: "Pls suggest health related customised AI solution,,,,ill family,,,where I can up…"
- `ytr_UgwEySoJ7…`: "Or its spread Ask this specific prompt on chatgpt: Chat gpt i want you to act li…"
- `ytc_Ugy0AP_Pp…`: "This may be one of the most important videos @FortNine has put out. As for the s…"
- `ytc_UgwN0W_Wi…`: "2034: Tesla finally unveils a WORKING "Full Self Driving" and renames it "See? I…"
- `rdc_m9gzl42`: "You can run the distilled versions of Llama/Qwen fairly easily... But 671GB for …"
- `ytc_UgwfMlh2O…`: "The problem wasn't the AI (that most likely was a consequence of it), it was how…"
- `ytc_Ugy5ubYii…`: "Trust me if the AI models had a shred of liability bad software would never be a…"
**Comment**

> we get closer to the completion of AGI the more we humanize AI. keep it up! i love this.

| Field | Value |
|---|---|
| Platform | reddit |
| Topic | AI Moral Status |
| Timestamp | 1710973313.0 |
| Likes | ♥ 5 |
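The comment's timestamp is stored as seconds since the Unix epoch. A minimal sketch of converting it to a readable UTC datetime (Python is an assumption here; the tool's own language is not shown):

```python
from datetime import datetime, timezone

# Raw timestamp as stored with the comment (seconds since the Unix epoch)
raw_ts = 1710973313.0

# Convert to a timezone-aware UTC datetime
coded = datetime.fromtimestamp(raw_ts, tz=timezone.utc)
print(coded.isoformat())  # → 2024-03-20T22:21:53+00:00
```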
**Coding Result**
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
**Raw LLM Response**

```json
[
  {"id":"rdc_kw4hqjl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_kvswxv7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_kvt2x9w","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_kvukb81","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_kvudr7x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
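The raw model output is a JSON array with one coding object per comment. A hypothetical sketch of parsing such a batch and indexing it by comment ID for lookup (variable names are illustrative, not from the tool itself):

```python
import json

# A raw LLM batch response: one coding object per comment ID
raw_response = '''[
  {"id":"rdc_kw4hqjl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_kvswxv7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Index the batch by comment ID so any coded comment can be looked up directly
codings = {row["id"]: row for row in json.loads(raw_response)}

print(codings["rdc_kw4hqjl"]["reasoning"])  # → consequentialist
```

Keying on `id` is what makes the "look up by comment ID" view possible: each dimension of the coding result table can then be read straight off the matching object.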