Raw LLM Responses
Inspect the exact model output for any coded comment; look up a coding by its comment ID.
Random samples
- "The speed of data center expansion around the world is simply fantastic. It is e…" (ytc_UgzkeDBBv…)
- "7:10 loved how chatgpt realized that being compared to Peterson has a negative c…" (ytc_UgxTZ_435…)
- "With Ai development going this fast…. The movies about them robots 🤖 will become…" (ytc_Ugz_cp2rj…)
- "Just watched 2 Waymos stuck in an industrial area parking lot trying to wither f…" (ytc_UgyDcp0WL…)
- "As an ACTUAL artist, and one that specializes in digital art specifically and kn…" (ytc_UgzhNAu69…)
- "AI will increase humanity’s productive capacity, cure formally incurable disease…" (ytc_Ugyl2UOxF…)
- "Thank you for your comment. It's true that the interaction between humans and AI…" (ytr_UgzBbF1NT…)
- "HOW TO CREATE A DEADLY VIRUS IS AVAILABLE ON A BROWSER SEARCH ON HOW TO SETUP A …" (ytc_Ugx_glX3c…)
Comment
AI is here to stay. Can we regulate it? I doubt it. There are to many unethical individuals who will seek advantage over fellow competitors or governments to believe AI will totally be used for the benefit of humanity. Unfortunately, when you deal with human beings, greed, arrogance, and paranoia will be amplified by these programs and machines. I would bet that even as we sit here, AI systems are being designed, perhaps even deployed that are capable of harm to humans. This is a nuclear bomb, and it will be a question of deterrence, not regulation that is the answer. We had better hope the good guys can build AI that are as capable as the ones the bad guys will come up with. It will be a very hot yet silent war, carried on through fiber optic cables and massive AI servers. Good Luck good guys.....
youtube · AI Governance · 2023-05-30T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyG05qbSrChY8mtmj94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwuswAXPOkE5pPRjq54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJGZHLCH22z5N9Dyd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzfvIBRLiuobQKIbl14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyQs6HKqIJSsf7Oz794AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwty_yZ65XoKN0RhH14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugz8vaCKPZD_P1IFl3B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw5MY4FqCC8HXOFbjB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyXkdnTM1ED0e2v4Mh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwn9q_IRpXgz3j7ygZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
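The lookup-by-ID workflow above can be sketched in Python: parse the raw LLM response as a JSON array and index the records by comment ID. This is a minimal sketch, not the tool's actual code; the variable names are illustrative, and `raw_response` is abbreviated to two of the records shown in the response above (the field names match that response).

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Abbreviated here to two records from the response shown above.
raw_response = """[
  {"id": "ytc_Ugwty_yZ65XoKN0RhH14AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzfvIBRLiuobQKIbl14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

# Index the records by comment ID so any coded comment can be looked up directly.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the coding for the comment displayed above.
coding = codings["ytc_Ugwty_yZ65XoKN0RhH14AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # resignation
```

The same dictionary also supports checking whether every sampled comment actually received a coding, by comparing its keys against the list of sampled IDs.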