Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below; a scripted lookup sketch follows the sample list.
- "Honestly it was a mental health issue that was probably there before this whole …" (ytc_UgwGlPhcz…)
- "@giantvikingserotonin The dataset these models use is scraped off the internet …" (ytr_UgzmY_HIK…)
- "I can see where you're coming from, and it's definitely a topic that raises stro…" (ytr_Ugwk0NVv5…)
- "The I in AI is not real intelligence. That is the problem. Real intelligence act…" (ytc_UgxlP1uuT…)
- "No. Voting for a machine would put us at the mercy of machines who are unable to…" (ytc_Ugz18GRid…)
- "Good luck complaining to trump and Republicans. All these companies have paid Tr…" (ytc_UgzlCJ-Ot…)
- "Ai will cannibalize itself working in a vacuum. The only reason it’s learned any…" (ytc_UgxrejGWH…)
- "But to be fair the summary thing where they add the sources without having to lo…" (ytc_UgxZ78g8j…)
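For scripted access, the same lookup can run against the stored batch output. A minimal sketch in Python, assuming the coded records live one JSON object per line in a file named coded_comments.jsonl; that filename and the lookup_comment helper are assumptions, not part of the tool:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Scan a JSONL file of coded records for the one matching comment_id."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one coded comment per line
            if record.get("id") == comment_id:
                return record
    return None

# Usage, with a full ID taken from the raw response at the end of this section.
record = lookup_comment("ytc_UgzpL-oSeoKVXPta5ud4AaABAg")
if record:
    print(record["policy"], record["emotion"])
```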
Comment
I'm working on a solo startup that uses GPT-4 to do sys admin and basic programming tasks. I love this technology and think it's the most amazing thing to come along in ages. But I also think that people don't understand how fast this goes from being amazing to something that is completely out of control, because they don't understand that computing progress is exponential.
The only way to delay the rise of living digital superintelligence is to not create it. These things don't happen by accident. Animal/human characteristics like instinctive self-preservation, reproduction, and full autonomy are not going to "emerge" accidentally. There are, stupidly, engineers working to try to emulate aspects of living beings in AI. When you combine that with the next few generations of hardware, which could run 100, 1000 or more times faster than what we have now and enable approximately human-equivalent intelligence, you get hyperspeed self-replicating superintelligence. It's not going to happen accidentally. It's going to happen via ignorance or some kind of military program.
Even if no one is dumb enough to simulate things like reproduction or make them life-like, the hyperspeed that is coming means there will be a strong tendency for companies and countries in competition to give them more autonomy. Because if they make them wait a day for the humans to evaluate the next goal, that is 100 days or more of equivalent running time handed to the competitors (assuming 100X human thinking speed). We will not be able to keep up with what is going on if we deploy this kind of performance. We might be in control briefly during check-ins if the systems are built right, but competition, as I said, means they will need greater and greater levels of autonomy. And the amount of development between check-ins could be astounding. So we're mostly just spectators at that point.
Governments need to prohibit the types of AI hardware advances coming up in the next few years that would enable these hyperspeed AIs beyond X orders of magnitude. There needs to be a strong taboo against emulating digital intelligent life with things like self-preservation or reproductive (i.e. copying its code) goals, and open-ended systems with autonomy need to have very careful controls (such as Shapiro's Heuristic Imperatives). Absolutely all of that needs to be forbidden on hardware that goes beyond a certain level of performance.
Strangely people still don't realize how quickly technology advances. The 100-fold GPT-4 speed improvement is quite feasibly less than two years away.
We should also be putting a lot of money into interpretability, modular neural architectures, and different paradigms that don't have this black-box problem at all.
youtube · AI Governance · 2023-05-10T08:0… · ♥ 48
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
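The table above is a single record from the batch response below, rendered one row per dimension plus the coding timestamp. A minimal sketch of such a renderer; render_coding_result is a hypothetical name, and the record shape is assumed to match the raw response:

```python
def render_coding_result(record: dict, coded_at: str) -> str:
    """Render one coded record as a markdown Dimension/Value table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {record[dim]} |")
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)
```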
Raw LLM Response
```json
[
  {"id":"ytc_UgzpL-oSeoKVXPta5ud4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0M1-wPJPmof6XMNR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJw0IzJ4H1tkh-tIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwmP897iusCok7sm894AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwnOErK98mBnSq_CG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwUXM88uTt0U7ZeQat4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxA4t3d2TSnwA0ujiF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwQYCwjf5pplEGA7NB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxX3CMCrZa9VT683hV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDN1IUAoRm22b1ot54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
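Because the model returns text that is only JSON by instruction, each batch is worth validating before it is loaded. A minimal sketch; validate_batch is hypothetical, and the allowed value sets are inferred from the records above rather than from a published codebook:

```python
import json

# Allowed values per dimension, inferred from the batch shown above.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw batch response."""
    problems = []
    for i, rec in enumerate(json.loads(raw)):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim} value {value!r}")
    return problems
```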