Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI has made it much easier for me as a teacher to individualise learning—creatin…" (ytc_UgyMA3mcA…)
- "even if it could (much doubt) chatgpt is very likely going bankrupt within a few…" (ytc_UgzsOFbgC…)
- "What do yall think will happen to Louisiana now that the 4.3 million SQ ft Hyper…" (ytc_Ugz3kmsey…)
- "Oh you have no idea how blind some (smart) people are about all of this. They ar…" (ytr_UgzJjfMgY…)
- "isnt autocomplete the same way the ai figures out the eneergy in a human, based …" (ytc_Ugw_DQynD…)
- "My friend once showed me an AI rendition of his wife's unfinished art pieces and…" (ytc_UgyBtJfpU…)
- "If all human art depended on mimicking previously existing art then we wouldn’t …" (ytc_UgyT7lMSC…)
- "Waymo can’t compete long term. Tesla can manufacture 2,000 cars a day and Waymo…" (ytc_UgyC-nDq6…)
Comment
I strongly agree with your statement. I think people misinterpreted your idea as a claim that ignores the functionality of independently decision-making artificial agents, but if the argument you're trying to make is what I think it is, you are able to perceive this situation at quite a high level. This is because, in reality, regardless of the independence of AI systems - namely decision making - humans are the ultimate controlling agents in the grand scheme of things. It is true that decision-making may be detached from human intent, posing a significant threat to cybersecurity and the propagation of information. Reality is, however, AI isn't simply going to be making sporadic decisions that serve as a constant threat to humanity at large, being effectively operational on a large corpus of data. Any system can have unexpected results; unexpected results is not the big picture.
The big picture is precisely what you argued. This is why leading minds in the world in the field of AI are afraid of emerging monopolies or governments coming to resemble an authoritarian regime. This is why Elon Musk keeps mentioning regulations in his conversation. Not exactly the regulation of the AI itself, but the regulation of how certain intelligent agents are allocated and mobilized. The largest forms of AI will exist in data centers, and the groups that own the technology will represent an extreme minority of the entire human populace. The propagation of information will then be an immense problem. As people say, the gun isn't so much the threat as the person that holds it. Of course, many more crimes will happen on the smaller scale as well, examples including deepfakes or the spread of very realistic versions of the fake news we witness today.
People who have some form of knowledge on AI will be skeptical when you call it a "tool", but it really is a tool. A tool for corruption, a tool for information control, and a tool for various avenues of marketing, hyperpersonalization, and many more, whether in the light or in the dark. There are plenty of reasons for optimism but definitely room for caution as well.
youtube · AI Governance · 2023-05-04T07:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugx-2LM4K8ht1G7PrIJ4AaABAg.9pD7ITb-PJ09pD9xqbcTdY","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugx-2LM4K8ht1G7PrIJ4AaABAg.9pD7ITb-PJ09pDCs8yb05s","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytr_UgxbPcOP1TJDSiQ0SnF4AaABAg.9pD72Ms5n299pDlW51pJXZ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxnEue1P__V5UOZ0i54AaABAg.9pD6t3wto8M9pDJ9C5Q_lD","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgxnEue1P__V5UOZ0i54AaABAg.9pD6t3wto8M9pDrG3YHaiG","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxnEue1P__V5UOZ0i54AaABAg.9pD6t3wto8M9pH_XIrOnvE","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzjrV4N4LmUixQ-RFB4AaABAg.9pD6kxlDxJ39pDxWGLaUiR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzjrV4N4LmUixQ-RFB4AaABAg.9pD6kxlDxJ39pFIT8Qsc_Y","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytr_UgzO0pXSRGI7IFd-qyZ4AaABAg.9pD4EK80eIf9pD9ZJhzvcF","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyttQf3LC3uIBaTmC14AaABAg.9pD3l-ai3Y19pDpYCqxnbH","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
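The "Look up by comment ID" step above can be sketched in a few lines: the raw LLM response is a JSON array of records, each carrying an `id` plus the four coding dimensions shown in the result table. A minimal lookup, assuming that structure (the IDs and values below are hypothetical, not taken from the page):

```python
import json

# Hypothetical raw LLM response in the same shape as the array above:
# a JSON list of records with "id" plus the four coding dimensions.
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]
"""

def lookup_codes(raw: str, comment_id: str):
    """Parse the raw LLM output and return the codes for one comment ID."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            # Drop the ID so only the coding dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None  # ID not present in this response batch

print(lookup_codes(raw_response, "ytr_example2"))
```

In practice the real pipeline presumably also validates that each dimension's value is one of its allowed labels (e.g. `policy` in {regulate, ban, liability, none, unclear}) before writing the coding-result table.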