Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI replace reporters it won't be fun ai take billions Job human being will be s…" (ytc_UgzWOEOAf…)
- "A.I. evolves much faster than human intelligence. They start out badly, then a f…" (ytc_UgzHT226V…)
- "How can you tell whether an entity such as a robot, an animal, or a human is sen…" (ytc_UgzlI0Ri_…)
- "AI’s can write, but they can’t replace humans, though. You need a soul. Computer…" (ytc_UgxEki804…)
- "Human: I am wanting to cancel my subscription. AI: I'm sorry Dave, I'm afraid I …" (ytc_UgxaYUgWZ…)
- "No more like the ai they have now is as powerful as 100 thousand people on the l…" (ytc_Ugz7b28e0…)
- "AI lies all of the time. Do not rely on AI without checking the sources.…" (ytc_UgyZ8t9wC…)
- "Terminator was an optimistic view of the future. If the ai is smarter than Arnol…" (ytc_UgwxVjWvS…)
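The "Look up by comment ID" feature above can be sketched as a simple prefix index over stored comments. Everything here is illustrative: the full IDs are assumed (the UI only shows truncated `ytc_…` prefixes), and prefix matching is a design guess to cope with that truncation.

```python
# Minimal sketch of an ID-based comment lookup, assuming comments are
# held in a dict keyed by full ID. The IDs below are hypothetical
# completions of the truncated prefixes shown in the sample list.
comments = {
    "ytc_UgzWOEOAf_hypothetical1": "AI replace reporters it won't be fun ...",
    "ytc_UgzHT226V_hypothetical2": "A.I. evolves much faster than human intelligence ...",
}

def lookup(comment_id_prefix: str) -> list[str]:
    """Return texts of all comments whose ID starts with the given prefix.

    Prefix matching (rather than exact match) is an assumption, chosen
    because the UI only displays truncated IDs."""
    return [text for cid, text in comments.items()
            if cid.startswith(comment_id_prefix)]
```

For example, `lookup("ytc_UgzWOEOAf")` would return the single matching comment text, while an unknown prefix returns an empty list.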
Comment
Yeah I think this is the rationale that solves the question. I think there’s no way the military isn’t studying AI as an option for something, but trusting nuclear equipment to even the best AI is just asking for that one time it fucks up in a way you can’t take back. It’s not worth the money or time.
Source: reddit · Topic: AI Governance · Timestamp: 1699779113.0 (Unix epoch) · ♥ 98
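The timestamp on the comment card is a raw Unix epoch value; converting it to a readable UTC datetime is a one-liner:

```python
from datetime import datetime, timezone

# 1699779113.0 is the raw timestamp from the comment card above.
posted = datetime.fromtimestamp(1699779113.0, tz=timezone.utc)
print(posted.isoformat())  # → 2023-11-12T08:51:53+00:00
```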
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_k8wnwfj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_k8y7lxa","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_k8yo1dt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_k90hb9d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_k8xej7y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
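A raw response like the one above is a JSON array of per-comment code objects. Parsing it and checking that every record carries the expected dimensions might look like the sketch below; the required keys are taken from the Coding Result table, but the validation logic itself is an assumption, not the pipeline's actual code.

```python
import json

# Dimensions every coded record is expected to carry (per the
# Coding Result table above), plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Two records from the raw response above, used here as sample input.
raw = '''[
 {"id":"rdc_k8wnwfj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_k8y7lxa","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def parse_codes(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError if any record is missing a required dimension."""
    records = json.loads(raw_response)
    out = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        out[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return out

codes = parse_codes(raw)
print(codes["rdc_k8wnwfj"]["emotion"])  # → fear
```

Keying the result by comment ID makes it easy to join these codes back to the original comment records shown in the sample list.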