Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:
- "This is why , I believe in every movie from 90's and 2000's Terminator, I rob…" (ytc_UgzxOFuR7…)
- "AI won’t make world safer. Humans will lose autonomy, privacy, risk of rogue gov…" (ytc_UgzimA_w3…)
- "Oh LOOK! ET has returned! Except MUCH MORE UGLY! Seriously in DESPERATE NEED of …" (ytc_UgyFBX3oT…)
- "I believe we are missing the point when it comes to AI.The industry pushing for …" (ytc_Ugy7ovF47…)
- "I am writing a book. In the preface, I insist on informing the reader that in no…" (ytc_UgwkcOJd_…)
- "I'd say with all the other inadequacies on his behalf I seriously doubt he was u…" (ytc_Ugze03niK…)
- "I like to think that some form of Universal Basic Income would give me MORE ince…" (ytc_Ugx1EUVHG…)
- "😅 Interesting challenge — but just to be clear: I don’t actually have “life” or …" (ytc_UgyXnxjxd…)
Comment
There’s always the chance that humans start abandoning the digital world and go back to the origins.
No reason to control AI if that happens
Source: reddit · AI Governance · 1739145106.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mbx2l6x","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mbx4k1v","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_mbxfnwr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_mby74d6","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_mc0n39a","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
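A batch response in this shape can be parsed and indexed by comment ID, which is how a per-comment lookup like the table above can be served. The sketch below assumes only the field names visible in the JSON; the function name and the validation rules are illustrative, not part of the actual tool:

```python
import json

# A batch coding response in the format shown above (two rows copied from it).
raw_response = """
[
  {"id":"rdc_mbx2l6x","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mbxfnwr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
"""

# Hypothetical required keys, taken from the JSON rows above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index valid rows by comment ID.

    Rows missing any expected dimension are skipped rather than raising,
    since LLM output is not guaranteed to be well-formed.
    """
    rows = json.loads(raw)
    return {
        row["id"]: row
        for row in rows
        if isinstance(row, dict) and REQUIRED_KEYS <= row.keys()
    }

codings = index_codings(raw_response)
print(codings["rdc_mbxfnwr"]["emotion"])  # -> resignation
```

Indexing by ID also makes it easy to detect which comments the model silently dropped from a batch: compare the dictionary's keys against the IDs that were sent.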