Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Eric with AI is like Elon with FSD... "the real QA is in the public" and so what… (ytr_UgzQjbMTM…)
- Interesting conversation about AI! There are several more places you may look at… (ytc_Ugy1oVFR1…)
- The problem with ai "art" that, to me, give it basically a short livespan for an… (ytc_Ugx2-e5Ym…)
- You may not like AI art, but man this video gives a really good example that it … (ytc_Ugx3zF6ve…)
- Who's going to buy their products and services if no-one has a job? Bye bye prof… (ytc_Ugzw-fpy8…)
- This is another video that shows why AI is so dangerous (more so with gullible p… (ytc_UgzXPRLiT…)
- > Anyone that thinks AI can replace all devs is an idiot. Or they do very, v… (rdc_mte1tkh)
- Every AI fear we have, China is going to ignore and do it anyway. At least this … (ytc_Ugw3nEHyR…)
Comment
I'm sorry... but they fail to realize that it's going to happen regardless of what we say. Maybe the US will abstain. But do they honestly expect that *everyone* will?
It is inevitable. And better that we all have a roughly equal handle on the topic; because whoever gets the lead wins the race. I mean, imagine if we banned it, but China worked on it secretly. Next thing you know they would have advanced so far beyond us that it wouldn't even matter if we found out. There would be nothing we could do to stop them.
This is no fucking different than any other tech. It provides power, and needs to be used responsibly. Hell, simply on the topic of power, if we ever want out of this solar system we'll need to learn how to harness enough energy to blow up the entire planet; and then some. We can't simply decide to confine humanity to eventual extinction because we might fuck it up.
And that's what the argument against AI is. We have this tool which could unlock a MASSIVE potential for the human race. Solve so many issues. But, here we are, arguing whether to agree to never use it. To confine ourselves to our present state. All because we're afraid the AI might decide to make plans of its own.
Keep in mind, too, that AI has **NONE** of the wants and needs of humans. They only need electricity, which we provide. Not food, clean water, comfort, love, sex, entertainment, time off, etc. No fear of death, for themselves or their cohorts. Death would be scarcely different than a reboot; something they would go through regularly. We would have to purposefully program them to have traits that could make them want to have something beyond what they need. Want something that they have to take from us. So.. How about we simply not fucking do that?
WE are the threat. AI is only a tool. How we use it is the real problem. And banning won't work. We need to provide regulation and oversight. Some framework that prevents people from "accidentally" program
reddit · AI Governance · 1438019041.0 · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
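The dimension/value table above can be produced mechanically from one coded record. A minimal sketch in Python, assuming a record shaped like the entries in the raw LLM response (the `render_coding_table` helper and the timestamp argument are illustrative, not part of the tool):

```python
from datetime import datetime, timezone

def render_coding_table(record: dict, coded_at: datetime) -> str:
    """Render one coded record as the Dimension/Value markdown table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at.isoformat()),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# Example record matching the coding shown above.
record = {"id": "rdc_cthzbiy", "responsibility": "government",
          "reasoning": "consequentialist", "policy": "none",
          "emotion": "resignation"}
print(render_coding_table(record, datetime(2026, 4, 25, 8, 33, 43, tzinfo=timezone.utc)))
```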
Raw LLM Response
```json
[
  {"id":"rdc_cthrg40","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_cthyg0v","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_cti461i","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_cthnpmn","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_cthzbiy","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
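The "Look up by comment ID" view can be backed by parsing a raw response like this into a dict keyed on `id`. A minimal sketch, assuming the model reliably returns a well-formed JSON array (a real pipeline would need error handling for malformed output; `index_codings` is a hypothetical helper name):

```python
import json

# A trimmed copy of the raw LLM response shown above.
raw_response = '''[
 {"id":"rdc_cthrg40","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_cti461i","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_codings(raw_response)
print(coded["rdc_cti461i"]["emotion"])  # -> fear
```

Keying on the ID the model echoes back is also a cheap integrity check: any ID missing from the dict marks a comment the model dropped from its batch.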