# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples
- `rdc_o1w4q0g`: "Sadly every currently effective AI is owned by some of the most evil people on t…"
- `ytc_UgzZYw1Fu…`: "This will age well. Bookmark this video, and come back when the AI circus collap…"
- `ytc_UgxnRlPnM…`: "One question. If AI is going to replace the working class, then who is going to …"
- `ytc_UgzYM8Nwy…`: "So his main risk of AI is the fact that capitalist will get a hold of it. The ma…"
- `ytr_UgwiPjfLZ…`: "This is way too common for this supposedly modern era. People are raised in re…"
- `ytc_UgwvARqAI…`: "Thought this idiot was going to call it “woke” 🙄 because he most likely sends hi…"
- `ytc_Ugy9Pomo3…`: "Self-driving has come a long way, but still some kinks to work out. Hopefully, i…"
- `rdc_kyz5nb6`: "AI in very specific areas like encryption is an arms race. Image generation? Not…"
## Comment
"There’s no stuffing AI back inside Pandora’s box—but the world’s largest AI companies are voluntarily working with governments to address the biggest fears around the technology and calm concerns that unchecked AI development could lead to sci-fi scenarios where the AI turns against its creators. Without strict legal provisions strengthening governments’ AI commitments, though, the conversations will only go so far."
"First in science fiction, and now in real life, writers and researchers have warned of the risks of powerful artificial intelligence for decades. One of the most recognized references is the “[*Terminator* scenario](https://www.thestreet.com/technology/bill-gates-addresses-ais-terminator-scenario),” the theory that if left unchecked, AI could become more powerful than its human creators and turn on them. The theory gets its name from the 1984 Arnold Schwarzenegger film, where a cyborg travels back in time to kill a woman whose unborn son will fight against an AI system slated to spark a nuclear holocaust."
"This morning, 16 influential AI companies including Anthropic, Microsoft, and OpenAI, 10 countries, and the EU met at a summit in Seoul to set guidelines around responsible AI development. One of the big outcomes of yesterday’s summit was AI companies in attendance [agreeing to a so-called kill switch,](https://www.cnbc.com/2024/05/21/tech-giants-pledge-ai-safety-commitments-including-a-kill-switch.html) or a policy in which they would halt development of their most advanced AI models if they were deemed to have passed certain risk thresholds. Yet it’s unclear how effective the policy actually could be, given that it fell short of attaching any actual legal weight to the agreement, or defining specific risk thresholds"
"A group of participants wrote an open letter criticizing the forum’s lack of formal rulemaking and AI companies’ outsize role in pushing for regulations in their own industry. “Experience has shown that the best way to tackle the
Source: reddit · Topic: AI Governance · Posted: 1716776365 (Unix timestamp) · ♥ 6
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
## Raw LLM Response
```json
[
  {"id":"rdc_l5tvbik","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_l5umuyk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5v0co8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_l5v1fy0","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_l5w1yx9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
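The raw response is a JSON array of per-comment codes keyed by comment ID, one object per comment with the four dimensions shown in the coding table. A minimal sketch of parsing and validating such a response might look like the following; the `CODEBOOK` value sets are assumptions for illustration (only the values that actually appear above are confirmed), and `parse_coded_response` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension. Only the values observed in the
# responses above are confirmed; the rest are illustrative assumptions.
CODEBOOK = {
    "responsibility": {"company", "developer", "ai_itself", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "hope", "anger", "mixed", "neutral"},
}

def parse_coded_response(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a
    value outside the codebook.
    """
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        for dim, allowed in CODEBOOK.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        coded[comment_id] = {dim: entry[dim] for dim in CODEBOOK}
    return coded

raw = ('[{"id":"rdc_l5tvbik","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_coded_response(raw)
print(codes["rdc_l5tvbik"]["policy"])  # → regulate
```

Validating against an explicit codebook catches the most common failure mode of LLM coding runs: an off-codebook label (e.g. a misspelling or an invented category) silently entering the dataset.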