Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Oh please!
*Its not YOUR Ai, you Hippy!*
You are just playing with a predictive …
ytc_Ugwkr9vz0…
It’s just not sustainable. UBI doesn’t provide the amount of income needed to li…
ytc_UgxzL-ekW…
I can tune in to peoples energy if I have a perceptual link to them and I had a …
ytc_UgzLST1Gq…
One cukè conversation, yet not convincing. IF, u get AI rt?! 👀 O, so y’all not s…
ytc_Ugz1FgKx_…
As somebody working in healthcare tech, this should NEVER happen. What should ha…
rdc_jtfmlbi
Execs love ai because it does THEIR jobs really well. Proving that they were nev…
ytc_UgykQAUd3…
They’re already automating restaurants, fast food and grocery stores. Plus not e…
ytr_Ugy1izOUh…
I can't help but recognize a resemblance between the methods that they are discu…
ytc_UgwMl0lzP…
Comment
Even a slow take-off of super intelligence is dangerous if the problematic AI model can hide self-preservation until release. It's as stated by both sides an unknown and therefore you get that one chance, that's why AI safety is important. I must add that focusing on current safety issues could be quite productive to stifle existential threats as well though, so both sides are not mutually exclusive at all.
Most people of power are not benevolent. If Yann says "we are building it" it's exactly that which scares me; "we" suck. Equating AI to sci-fi is a strawman, it's already here. AI is often called the last technology, and so it's equally likely that Yann's promise AI will be always subservient turns out to be hilarious (to alien archeologists). To be fair, I really like Yann's idea of implementing constraints because this can prevent instrumental self-preservation. Instrumental means that it was not a pre-defined goal, but an implicit goal needed for success (ie. to bring you your tea). Also, if a system is energy constrained then there should be a sense where it's simply too cumbersome to plot how to conquer the world (just to get said tea). I mean, I don't need my toaster getting all hot plotting doom.
AI is a threat in the sense that the military perceives threat though; capability regardless of intent (and future surpassing capability). There will ensue an arms race and certainly safety aspects will be less focused on. So in that sense AI will pose more of an existential threat -> Current artificial intelligence needs regulation for current issues, so certainly future superintelligence will need regulation for existential issues, be it 5 years or 1000 years away. If it's 5 years away that's more of a problem.
When Melanie addresses innovation via AI in fields like medicine she is correct, but she fails to extrapolate risk to general AI. The "resilience of society" is actually the suffering of many in that society. If AI only kills 1% that is still an existential threat for that 1%. A bullet flying to your face is a severe and urgent environmental issue. Companies (which are like underpowered future AI systems) are doing bad things right now and these are certainly existential threats for a subset of humanity. Moreover, the systemic issues in which these companies and AIs will both operate compel such agents to take shortcuts. The exponential growth of accessibility ensures bad actors will be those agents. These are not mere speculations, but observations of similar systems.
The thing is we don't know if intelligence and benevolence are orthogonal, it could be that super-intelligent AI by definition is benevolent, but it could also be it isn't. Technology has always served to fan the flames of the forge. In the end, it's about making a research agenda that tackles current issues in order to empower knowledge on the more fundamental issues shortly thereafter. Again, both sides are not mutually exclusive.
Yann says things that are not safe will not be deployed, but that's not true. Companies put lead in gasoline because the benefits to them were perceived to outweigh the drawbacks to others. Max is right when he talks about humility.
youtube
AI Governance
2023-06-28T12:0…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx3xw4e8ocKyU_ZQVB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwvFU5BKq0WWt53Omp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyKKv9bE8upTa5Sbyl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyVKsTelwk9yglYzrV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxPb-0iY4pi7iqejet4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHCTk_4zxgU8wyWEp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzzypf_v-asdoe7_Nh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxIX-TJyadxzvqSmI94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxfe8sR2whVY9uI4Jd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwETgxNS3mPOdvrjtV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
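The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of the comment-ID lookup described at the top of the page, assuming this array shape; the helper name and the single-entry sample payload are illustrative, not part of the tool:

```python
import json

# Trimmed-down stand-in for a raw LLM response: a JSON array of coding
# objects keyed by comment ID (one full ID taken from the response above).
RAW_RESPONSE = """
[
  {"id": "ytc_Ugx3xw4e8ocKyU_ZQVB4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID,
    skipping any entry missing one of the expected dimensions."""
    codings = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugx3xw4e8ocKyU_ZQVB4AaABAg"]["policy"])  # regulate
```

Validating each entry before indexing matters here because the model output is untrusted: a malformed or partially coded object is dropped rather than surfacing later as a `KeyError` during lookup.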