Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Can AI stop 100,000 angry unemployed men with AR15s cutting the power to the AI …
ytc_UgwP--J4T…
They're worried about not being the wealthiest in the world anymore because AI h…
ytc_UgzVkODKN…
Thinking in human-readable language doesn't mean the words mean the same - Our w…
ytc_UgwqkBNBW…
I’m curious. So I’m seeing a not so distant future where all jobs are done by ro…
ytc_Ugwd1C1IK…
Our hypercapitalist society requires us to work in order to afford our basic nec…
ytc_UgwtNQ7Q3…
Let's assume that that "test" prompts are different from regular prompts. Differ…
ytc_Ugzm3tp0Z…
Ai “artist” don’t get that, they’re pretty sad to look at. Whenever I see them t…
ytc_Ugxw9cC-0…
you know she cant sue, it isnt the websites fault, someone just used the AI, tha…
ytr_UgyvCIfTp…
Comment
Elon Musk's Stance on Artificial Intelligence: Is he Right?
Artificial Intelligence (AI) has become a hot topic in recent years, with many experts warning about the potential dangers of creating intelligent machines that could surpass human intelligence. One of the most vocal critics of AI is Elon Musk, the CEO of Tesla and SpaceX. He has been warning about the dangers of AI for years, and his statements have often been met with controversy and skepticism.
In a recent tweet, Musk stated that "AI is much more dangerous than nukes," and that it poses a significant threat to humanity. He has also expressed concerns that AI could become too intelligent for humans to control, leading to a dystopian future in which machines rule the world.
However, Musk's critics argue that his views on AI are misguided and that he is fear-mongering rather than engaging in a productive discussion about the potential benefits and risks of AI.
One key point of contention is the claim that AI will never be as intelligent as humans. While it's true that AI is only as good as the data it is fed and the algorithms it uses, there is no denying that AI has already surpassed human performance in certain areas, such as image recognition and language processing.
Furthermore, AI has the potential to continue to improve and surpass human intelligence in many more areas, which could lead to significant advancements in fields such as medicine, finance, and engineering.
However, the issue of control is a significant concern when it comes to AI. Musk has argued that humans need to have more control over the development of AI and that regulations need to be put in place to prevent AI from becoming too powerful.
While some argue that this is unnecessary and that the free market will regulate AI development, there are valid concerns about the potential misuse of AI by governments or corporations.
Musk's involvement in AI has also come under scrutiny, with some suggesting that he has a vested interest in promoting fear about AI to advance his own agenda. He drew criticism for walking away from OpenAI, an organization he co-founded, after voicing concerns that it was becoming too focused on making a profit rather than on developing safe AI.
Some have suggested that Musk's criticisms of AI are motivated by his desire to gain control over the industry and ensure that it is developed in a safe and ethical manner.
In conclusion, the debate about AI and its potential risks and benefits is far from over. While Musk's views on AI have been met with controversy and skepticism, there are valid concerns about the potential misuse of AI and the need for more regulation and oversight.
Regardless of whether AI ever surpasses human intelligence, it is clear that it has the potential to revolutionize many aspects of our lives. It's up to us to ensure that we use this technology in a responsible and ethical manner, for the benefit of all humanity.
youtube
AI Governance
2023-04-18T02:2…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzD_LAuu-Che3zWPwF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxW6-XI-bjv_RynZRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx07ASufsKdSMXrXUZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5PnGQJGA82icDFoB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwcIlD7TPQO7PsGhc94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx82VX3iD-V6pwxvJt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzk4j19XeKbEYP71954AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz8QjJezaG9z1hsWQJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQnJlCS_dtv8bU6OJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugx8VnJfUH66d_4s4t54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
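A raw response like the one above is easiest to work with once it is parsed and validated against the coding scheme. Below is a minimal Python sketch: the allowed value sets are assumptions inferred only from the values visible in this response and the result table (the full codebook may define more categories), and `parse_coding_response` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the response above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company",
                       "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation"},
}

def parse_coding_response(raw):
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the allowed set, so malformed model output fails loudly
    instead of silently entering the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with one row shaped like the response above (hypothetical ID):
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_x"]["emotion"])  # fear
```

Validating against a closed vocabulary like this is what lets a coded row fall back to "unclear" deliberately rather than accepting arbitrary free-text labels from the model.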