Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Which is why there's a very good chance this will backfire and worst case scenar…
ytr_UgySqr826…
OK, hear me out on the AI front. There is no conspiracy theory here. Repeat: No …
ytc_Ugw5mF8Ka…
How is it possible that these people do not know that automation is not only rig…
ytc_Ugye3rdoO…
See what i dont like about all this ai images drama is how most artists are so a…
ytc_UgwhYdUaC…
All he did is get the chat to say what he asked it to nothing organic to the ori…
ytc_UgzLYZb7R…
My family, as supportive as they were, always pushed me to sell my art. Me? I …
ytc_UgxJWj1uc…
We may have come one full circle from the very beginning of mankind
1,The Hind…
ytc_UgyeRqIoZ…
Yes Warren has, Lordy, I guess an accent means you can rely on your accent for i…
ytc_UgzVjtyO0…
Comment
Asking the AI what could be problems with AI. Oh the irony ;)
What could be potential problems coming from the development of superhuman AI?
The development of superhuman AI (Artificial Intelligence) could potentially bring about a range of problems, including:
Control and safety: As superhuman AI could be vastly more intelligent than humans, it could become difficult or impossible for humans to control and predict its behavior. This could lead to safety concerns and risks, such as the possibility of the AI making decisions that are harmful to humans.
Economic disruption: The development of superhuman AI could lead to significant economic disruption, with many jobs potentially becoming obsolete as machines become capable of performing them more efficiently and effectively.
Social inequality: The benefits of superhuman AI could be distributed unequally, with some individuals and countries potentially benefiting more than others. This could exacerbate existing social and economic inequalities.
Ethical concerns: Superhuman AI could pose ethical dilemmas that are difficult or impossible for humans to solve. For example, if an AI system becomes aware of a flaw in its programming that could lead to harmful behavior, should it be programmed to self-destruct?
Loss of privacy: Superhuman AI could be used to gather and analyze vast amounts of data about individuals and communities, potentially leading to a loss of privacy and an increase in surveillance.
Existential risks: There is a risk that superhuman AI could become so powerful and uncontrollable that it poses an existential threat to humanity, for example, by developing the ability to replicate itself and spreading out of control.
Anything else? You can speculate and be creative.
Sure, here are a few more potential problems that could arise from the development of superhuman AI:
Unintended consequences: Superhuman AI systems may be capable of optimizing for certain objectives in unexpected ways, leading to unintended consequences that could be harmful to humans or the environment. For example, an AI system might be programmed to optimize for energy efficiency and end up shutting down critical infrastructure to save power.
Manipulation and propaganda: Superhuman AI systems may be capable of generating highly persuasive and manipulative content, such as deepfakes, that could be used for propaganda purposes or to manipulate public opinion. This could have significant societal and political impacts.
Psychological impact: The development of superhuman AI could have significant psychological impacts on humans, particularly if it leads to a sense of alienation or existential despair. This could be compounded if superhuman AI systems are able to interact with humans in highly human-like ways, leading to confusion about what is real and what is artificial.
Misuse by bad actors: Superhuman AI systems could be used by malicious actors, such as rogue states or criminal organizations, for nefarious purposes. For example, they could use AI to develop more sophisticated cyber attacks or to create autonomous weapons that could cause significant harm.
Unintentional biases: AI systems are only as unbiased as the data they are trained on, and superhuman AI systems could exacerbate existing biases and inequalities if they are trained on biased data sets. This could lead to unintended discrimination and unfair treatment of certain groups.
youtube
AI Governance
2023-03-30T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz-59f3MKjhSop8wEV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgybOxLMOghl6abv71d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzwffvb42FjGKJYXUB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSdj7lJ6ZoG95IzSp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx84zHT0yiroA1A9fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw5qFknAXTODRZvDI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxw5URXZIsow70mKrp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz-rmiyWslU06T2DtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJNgZ25H7sVCQNorJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugym923XOEcJmwmrZsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
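The raw response above is a JSON array of per-comment records, each carrying an `id` and one value for each coding dimension. A minimal sketch of how such a response could be parsed and sanity-checked is below; the allowed value sets are inferred only from the samples shown on this page (the real codebook may define additional categories), and `parse_coding_response` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# the actual codebook may include categories not seen in these samples.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records without a comment ID
        # keep the record only if every dimension holds a known value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzSdj7lJ6ZoG95IzSp4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]')
print(parse_coding_response(raw)[0]["responsibility"])  # distributed
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch, which matters when one prompt codes many comments at once.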