Raw LLM Responses
Inspect the exact model output for any coded comment, or look a record up directly by its comment ID.
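For scripted lookups, here is a minimal sketch. It assumes the coded records are exported as JSONL with one record per line, each carrying an `id` like the ytc_/ytr_ identifiers shown on this page; the `lookup_comment` helper, the filename, and the storage layout are assumptions for illustration, not the tool's actual interface.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record whose "id" matches, or None if absent.

    Hypothetical helper: assumes one JSON object per line, each with
    an "id" field like the ytc_/ytr_ identifiers shown on this page.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```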
Random samples — click to inspect
- "I can see it now 3.5 billion dollars of product was redirected by a hacker to a …" (ytc_UghMcmp8b…)
- "first thought seeing the godfather of ai "might have some outdated knowledge but…" (ytc_UgwA6r5FX…)
- "Be aware, these questions are making robots more aware and sensitive, its like i…" (ytc_UgxahMeGx…)
- "I just graduated college with my major in graphic design. We are currently learn…" (ytc_UgzuohXfp…)
- "This idea that jobs go away but new jobs are added is a half truth, yes new jobs…" (ytc_Ugx1z3hqA…)
- "AI sources it's "knowledge" over what it finds in media, since media is already…" (ytc_UgwzQMy0I…)
- "Am I the only one who feels like the voiceover is AI? The pauses and pronunciati…" (ytc_Ugzmo4ja6…)
- "Let’s stop purchasing technology. Technology companies have thrived because of o…" (ytc_Ugw8IPbVH…)
Comment
@s.a5332 Yes, but he is right in this case and there are a multitude of experts who agree with him. People like Yudkowsky, Bostrom and many more have been talking about these risks for years, not to mention a plethora of Sci-Fi authors and scientists during the last century.
The big problem: we don't have any way to reliably control an artificial general intelligence, and certainly not an artificial super-intelligence. Why is this bad? Perhaps fundamentally because "values are orthogonal to intelligence". This means that a given level of intelligence implies no particular set of values, moral or otherwise. What does that mean? You can have a super-intelligent sociopath or a super-intelligent Samaritan, or something in between; both extremes are possible. Why is this bad? Because if it gets powerful enough or intelligent enough, there is no out-of-the-box guarantee that it will do things we approve of. Why is that bad? Because if it is significantly more powerful than us, it might do things that are very bad for us simply in order to achieve its goal. Why can't we just give it a goal we approve of? Because that is extremely difficult, and we don't currently know how to do it. Even simple neural networks with very clearly defined goals learn a bunch of things we never intended to teach them, and the networks we have now are orders of magnitude more complex, with our understanding lagging far, far behind.
There are also other risks, of course, like job loss, social instability, fake news, scams, etc.
There are immense benefits too, of course, and they might actually cancel out the smaller risks, but I don't think they cancel out the existential risk.
youtube · AI Governance · 2023-05-02T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
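As a sketch of what downstream validation of such a result might look like, the snippet below checks a record against the four dimensions in the table. The label sets are only the values observed in this section's records; the actual codebook likely defines more categories, and the `CodingResult` class itself is a hypothetical construct, not part of the tool.

```python
from dataclasses import dataclass

# Label sets observed in this sample; the real codebook may be larger.
RESPONSIBILITY = {"none", "user", "ai_itself"}
REASONING = {"consequentialist", "unclear"}
POLICY = {"none", "unclear", "regulate", "ban"}
EMOTION = {"fear", "indifference", "approval"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimension table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Fail fast on any label outside the observed vocabulary.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")
```

Constructing `CodingResult(**row)` from a parsed record then raises on any out-of-vocabulary label instead of letting it silently enter the dataset.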
Raw LLM Response
```json
[
{"id":"ytr_UgyYx0V58M6-hJO2p614AaABAg.9pDGQ_c__xf9pDTYyvSYd_","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgyYx0V58M6-hJO2p614AaABAg.9pDGQ_c__xf9pDUa7xez9B","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyYx0V58M6-hJO2p614AaABAg.9pDGQ_c__xf9pDbidW-2LR","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwKC_qlAHRByRAVJRh4AaABAg.9pDFa5tb_Gg9pDJ3KwJ3dT","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgwKC_qlAHRByRAVJRh4AaABAg.9pDFa5tb_Gg9pDK9BFn_rM","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwKC_qlAHRByRAVJRh4AaABAg.9pDFa5tb_Gg9pDNWRmMnjZ","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxXMDjlv--w1zhkL2V4AaABAg.9pDFKd4jq9G9pDQEeZljlQ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxYbkEobw2ftc8kNkR4AaABAg.9pDCvNkssWP9pDNEWmZ_Xx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxYbkEobw2ftc8kNkR4AaABAg.9pDCvNkssWP9pDNKYoqYJC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy0ibI3iUL2jHGYUOl4AaABAg.9pDCN6KdCDj9pDEYJJDDtR","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
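A batch response like the one above can be sanity-checked before the per-comment results are stored. The sketch below shows one way to do that; the `parse_batch` helper and its required-key set are assumptions inferred solely from the fields visible in this response, not the tool's actual pipeline.

```python
import json

# Fields present in every record of the sample response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw batch response and check each record's shape."""
    records = json.loads(raw)  # raises on malformed model output
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"{record.get('id', '<no id>')}: missing {sorted(missing)}")
    return records
```

Calling `parse_batch` on the raw text above returns the ten records as dictionaries; malformed JSON or a record missing a dimension surfaces as an exception rather than a silently incomplete coding.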