Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Political corruption? You mean the liberals rigging elections and then are caugh…" (ytc_UgzMzmaWv…)
- "Wild, AI, can reason, systematically. / Does A.I. have a soul. NO / chat GPT 4? The…" (ytc_UgybVWC3u…)
- "I'm sorry about that, it sucks that people are unempathetic. The loneliness real…" (rdc_n7tbsy5)
- "It’s literally saying what’s being programmed in it 🫥 and look at these people i…" (ytc_UgxmxngxO…)
- "You were not talking about the law, but about ethics. Ethically, using your work…" (ytc_UgxGZS7Or…)
- "The interesting thing to me is that AI might advance to the point where BOTH wri…" (ytc_Ugzxg-lcU…)
- "The big beautiful bill (that is now law) specifically prevents all 50 states fro…" (ytc_UgyjRi7Ta…)
- "Norwegian here: the earth porn is mainly for people who live in very non central…" (rdc_ckqb1ne)
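For scripted access, the same lookup can be approximated outside the UI. A minimal sketch, assuming the coded comments are exported as JSON Lines with one record per comment; the file name `coded_comments.jsonl` and this storage layout are assumptions, not the tool's documented format:

```python
import json

def find_coded_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coded record for `comment_id`, or None if absent.

    Assumes one JSON object per line, each with an "id" field shaped like
    the entries in the raw LLM response shown further down this page.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage with an ID from the samples above:
print(find_coded_comment("rdc_ckqb1ne"))
```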
Comment
In this interview with Tucker Carlson, Elon Musk discusses his concerns regarding the potential dangers of hyper-intelligent AI. He explains that while AI can be very helpful in solving complex problems, it also has the potential to become a danger to society if it becomes too intelligent and is able to outsmart humans.
Musk warns that if AI is not carefully monitored and regulated, it could lead to a situation where machines are controlling humans, rather than the other way around. He also highlights the importance of developing ethical guidelines for AI to ensure that it is used for the benefit of society.
As general advice, it is crucial for policymakers, businesses, and society as a whole to take AI seriously and address the potential risks it poses. This involves investing in AI research and development, implementing ethical guidelines, and having ongoing discussions about the role of AI in society. While AI has the potential to transform industries and improve lives, it must be used responsibly to ensure that it does not cause harm.
youtube · AI Governance · 2023-04-22T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
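The dimensions in this table follow a fixed coding schema. A sketch of that schema as a typed record, assuming Python; the type name is hypothetical, and the allowed values are inferred only from the outputs visible on this page, so the literal lists may be incomplete:

```python
from typing import Literal, TypedDict

class CodingResult(TypedDict):
    """One coded comment, mirroring the raw LLM response entries below.

    Value sets are inferred from this page's outputs and may not be
    exhaustive for the full codebook.
    """
    id: str
    responsibility: Literal["company", "developer", "distributed", "ai_itself", "none"]
    reasoning: Literal["consequentialist", "deontological", "mixed", "unclear"]
    policy: Literal["regulate", "unclear"]
    emotion: Literal["fear", "mixed", "outrage", "indifference", "resignation", "approval"]
```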
Raw LLM Response
```json
[
  {"id": "ytc_UgzqXrro1BuHgZclS5V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugyvc9qyXvpo4uuQoSd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxm90Cz9ic2BuQANbp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy1bTssf0RY0H1PJEd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxcsfOIvdygXEZXlxd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyJrlVBFipR9PQPldV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwPAxazlT4uef736iF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwPhI6BLMLvjUyXZOF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyH_6XkcCtqaALxZgt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
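A short sketch of how a raw response like this might be turned into the per-comment table above, assuming the model returns a well-formed JSON array; the variable names are illustrative, and the snippet inlines one entry as a stand-in for the full array:

```python
import json

# Stand-in for the full JSON array printed above.
raw_response = """[
  {"id": "ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index the entries by comment ID for constant-time lookup.
rows = {entry["id"]: entry for entry in json.loads(raw_response)}

# This entry carries the same values as the Coding Result table above:
# distributed / consequentialist / regulate / fear.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim} = {rows['ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg'][dim]}")
```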