Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_o467f2w` — "We can start by blocking politicians from obtaining positions where they can ena…"
- `ytc_UgzCpPmJt…` — "Why do people seem to forget the most important card humans hold, who much more …"
- `ytc_Ugw7jH1QA…` — "Healthcare will be massively worth getting into, all these people using robots t…"
- `ytc_UgwS6kLE7…` — "7:14 um no. Using steroids in the gym would be comparable to something like abus…"
- `ytc_UgzODLBfr…` — "The one thing I just can't see being solved (ever) is the "unalignment risk" - a…"
- `ytc_UgwJebRFR…` — "I would correct -> we have pretty good idea how AI works. On the contrary, when …"
- `ytc_Ugxqn1EFy…` — "8:01 this is actually one of the ways i think we're doomed for ai since no mater…"
- `ytc_UgyD1b6mL…` — "Do we REALLY NEED autonomous cars with all the problems they entail ? Maybe they…"
Comment
Did you write the "About the Author" and "Acknowledgements" parts yourself (human)? Or did the AI write it?
Source: reddit · AI Responsibility · Posted: 2023-03-24 (Unix timestamp 1679683285) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id": "rdc_jdix07x", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jdizhfe", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jdktdfc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jdlkrrs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jdj7cax", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
```
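The raw response is a JSON array with one object per coded comment, keyed by the comment ID. A minimal sketch of looking up a single comment's coding from such a batch (the field names come from the sample response above; the `lookup` helper is illustrative, not part of the tool):

```python
import json

# A small batch in the same shape as the raw LLM response shown above:
# one object per comment, with the dimensions used by the coder.
raw = """[
  {"id": "rdc_jdix07x", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jdizhfe", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]"""

def lookup(raw_response: str, comment_id: str):
    """Return the coding record for comment_id, or None if it is absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw, "rdc_jdix07x")
print(record["emotion"])  # indifference
```

Looking up an ID that the batch does not contain simply returns `None`, which mirrors the "look up by comment ID" behavior of the page.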