Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This video conveniently leaves out how Elon Musk wanted OpenAI to remain nonprof…" (ytc_UgzFLYXhJ…)
- "this is too narrow of a mindset. AI won't look anything like it does today 10 ye…" (ytr_Ugytao46q…)
- "@aleksandra6769 it's pretty normal in darker roleplay to write depressing stuff…" (ytr_UgzmlVTUB…)
- "Wait till it starts shopping for its own hair and faces... you think shoes are b…" (ytc_Ugw2vdXLP…)
- "Totally agree the hype around AI replacing coders feels overblown, especially wi…" (rdc_n7k341m)
- "I get that the whole point of ai is to replace actual artistic professionals. Bu…" (ytc_UgzFZZcBo…)
- "These layoffs were coming regardless. It just so also happens to be that this wa…" (rdc_m81ta9f)
- "Current AI could easily destroy humanity - use it to craft disinformation on a l…" (rdc_k0ca2c0)
Comment
This is a very complicated issue, but I think this is going to be a similar situation to when everyone was racing to build nuclear weapons first. Everyone wants to be the first to have it so they have a deterrent to prevent others from deploying their version of AI in a hostile or damaging way. Everyone thought for a long time after nukes became a thing that the world was going to end any day now, because we were in awe of the sheer destructive force that was literally at someone's fingertips, but it never happened (although it was close a few times).

We may end up in a new cold war with AI tech going forward because people are scared of what deploying an AI against another country or countries would look like if it hit their power grids, banking centers or military systems. This fear leads to de-regulation of the people working on it, and that's never a good thing. However, to play devil's advocate here: if we shackle our people in the U.S. or Europe, and that leads to China or Russia making the first major breakthrough with AI superintelligence, is the rest of the world a better place because we took the time to be cautious?

That's really the heart of the issue, and it isn't as simple as everyone agreeing to regulate development because "it's the right thing to do". That assumes too much rational thinking on the part of everyone involved; this is about power, money and control at the end of the day. I don't think there is a "right answer" to any of this, because everyone will always assume the "other guys", whoever they may be, will not abide by the agreements, and someone, somewhere is eventually going to have a "plan B" in place to combat the other side if they succeed in doing something that is prevented by an agreement.
Interestingly enough, I watched a cartoon recently called Pantheon about UI, or uploaded intelligence, that dealt with what could happen if people are able to upload their consciousness and essentially become a digital mind with full access to the internet. It was far more devastating than a possible AI because of just how much faster people could think, process information and still have a clear goal or intent. Just one more thing to be wary of, I guess, but this is tech that is still far into the future.
youtube
AI Governance
2025-08-28T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxcIZBURjFZrNOIi0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwA3s9zZFiUaFchRXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx6i94q4eAqwrwq2w14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8njK_97ioFxVM6Rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxnqaGOuXMYdyW8r9B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
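The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four coding dimensions shown in the table. A minimal sketch of how such a batch response could be parsed and sanity-checked is below; the allowed label sets are inferred from the values visible in this dump, not from the tool's actual codebook, and `parse_codes` is a hypothetical helper:

```python
import json

# Allowed labels per dimension, inferred from the values seen above.
# The tool's real codebook may include labels not shown in this dump.
ALLOWED = {
    "responsibility": {"distributed", "company", "government", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "approval"},
}

# One entry from the raw response, for illustration.
raw = '''[
  {"id": "ytc_UgxcIZBURjFZrNOIi0x4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]'''


def parse_codes(raw_response: str) -> dict:
    """Parse a batch LLM response into {comment_id: codes},
    rejecting any value outside the allowed label sets."""
    codes = {}
    for entry in json.loads(raw_response):
        cid = entry.pop("id")
        for dim, value in entry.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        codes[cid] = entry
    return codes


codes = parse_codes(raw)
print(codes["ytc_UgxcIZBURjFZrNOIi0x4AaABAg"]["emotion"])  # resignation
```

Validating against a closed label set at parse time catches the common failure mode where the model invents an off-schema label; such entries can then be flagged for re-coding rather than silently stored.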