Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I wish the robot in the middle would STFU and let them talk!! I'm trying to find…" (ytc_Ugyr8x3zs…)
- "I'm also telling them how great and friendly they are etc. And to.. Ahem.. certa…" (ytc_Ugz5edUXJ…)
- "Am I the only one who got creeped out just by looking and listening to that robo…" (ytc_UgzSjV5An…)
- "The way people are defending AI like it's a human is so ironic to me.…" (ytc_Ugw312g7b…)
- "@Speedoodleman if you don’t have the patience or time you don’t deserve art? Yea…" (ytr_UgzVvXTmj…)
- "Skilled Trades will be safe from AI for 100 plus yrs. Think about it,,, AI will …" (ytc_UgyajQkC_…)
- "Bro, DAN is an AI joke. I'm a programmer and I would never ever in a million yea…" (ytc_UgxjJy3T1…)
- "just look at the bolshevik genocide & what the people responsible did to the rus…" (ytc_UgwWe-2FK…)
Comment
"The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government published on Monday.
“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.
The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic and Meta—as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision-making by the executives who control their companies."
reddit
AI Responsibility
2024-03-18 (Unix timestamp 1710731131)
♥ 24
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kves43a","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_kvdwcrk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_kvdli81","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kvdskkz","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_kvepm08","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
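The lookup-by-ID view above can be sketched as a small parsing step: the model returns a JSON array of records, each keyed by `id` and carrying the four coded dimensions. The following is a minimal illustration, not the tool's actual implementation; the `lookup` helper and `DIMENSIONS` tuple are assumptions introduced here, and the sample JSON is abridged from the response shown above.

```python
import json

# Abridged copy of the raw LLM response shown above (illustrative only).
raw_response = """
[
  {"id": "rdc_kves43a", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_kvdli81", "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# The four coded dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw_response)
print(lookup(records, "rdc_kvdli81"))
# {'responsibility': 'government', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

In practice a tool like this would also validate that each dimension's value falls in its codebook (e.g. `emotion` in {fear, outrage, approval, indifference, …}) before storing the record.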