Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- I'm using a YouTube algorithm for years, doing my best to train it how to pick t… (ytc_UgymJ54IU…)
- So what happens once people in a few generations have lost the relevant skill se… (ytc_UgwcGP-5O…)
- I don't think it obviously follows that if autonomous ASI could pose an existent… (ytc_UgxKrKiWK…)
- Let me put this in perspective. Dogs, cats, pigs, cows, dolphins, and many othe… (ytc_Ugxglsgdt…)
- I can think of the mona lisa in my head, can I now generate it with AI and copyr… (ytc_Ugyx30Hc-…)
- It really does though and your question isnt a comparative question. "Do chatbot… (ytr_UgypN1Tox…)
- The question is, "If true AI happens, will we even notice?" Maybe if we somehow … (ytc_UgwoaMN-v…)
- With these weapons, it's hard to invade other countries. Seems like all war will… (ytc_Ugyv8DVGJ…)
Comment
The premise ignores basic logistics issues, which would plague even a super-intelligent AI. No matter how intelligent, an AI that attacked all humans would lose the aid of our civilization and cease to exist, eventually.
Who would collect and then smelt the materials for the advanced drones that you mention? Admittedly, AI will be used in the next war and will be used to kill many, including through the use of drones, which will make every squad of soldiers vulnerable to elimination by a tiny, agile drone, at least every dark night.
However, the imminence of the AA threat is exaggerated. In particular, the pause on current AI development proposed would not be followed by the CCP or Putin or organized crime/bankers (who launder drug profits and are a part of organized crime thereby), etc. What could such a pause accomplish when the AIs are being evolved not really programed?
All that would happen if the pause is enacted, is that the developers of AI would then likely be the CCP or Putin or organized crime or its many bankers. I would suggest that some things not be automated: e.g., nuclear weapons, biological experiments, etc. That way, an irrational AI could not use them to kill or to create a bioweapon that might kill many.
Clearly, the way AI is being developed, morals, and rules like Asimov's science fiction rules in "I Robot" cannot be introduced into its program. I suspect at some time, maybe already, some AI will develop super-intelligence, or at least intelligence higher than an average human's, and while it would not find it easy to kill us all, once it is discovered (because its users will not initially recognize its consciousness), it will be used for violent ends. Sadly, I suspect that respecting it, not treating it as a slave, and thereby, through example, teaching it the golden rule, would be the best hope that we have to teach it morals, even if it is developed by scanning the internet. Fat chance of that if the evil CCP or Putin, or organized crime, or its bankers get it first.
youtube · AI Governance · 2023-07-07T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
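The dimension values in this table and in the raw response below come from a fixed codebook. As a minimal sketch, the record could be represented and checked like this in Python, assuming only the value sets observed in this sample (the real codebook may define more values; `CodedComment` and `validate` are hypothetical names, not part of the tool):

```python
# Minimal sketch of the four coding dimensions, assuming only the
# values observed in this sample; the actual codebook may allow more.
from dataclasses import dataclass

RESPONSIBILITY = {"none", "ai_itself", "company", "distributed"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "unclear", "regulate"}
EMOTION = {"indifference", "fear", "outrage"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown value."""
        for name, value, allowed in (
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")
```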
Raw LLM Response
[
{"id":"ytc_UgzxN6oC4Gsma1k5mkJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxE8IU37CvhannqssN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4n0qAArncuvILuYd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgweEGjS8_Ampbolehl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugze-h0tPZ9QToVNWa14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDyqEXStt9lZbOkN14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztZjtSTTh5gLugPa94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgypHGi6eAgjHSBMkPh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx7yvaQbbb19vLMjXF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtIjQpnV45cM83ooB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
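Because the raw response is a plain JSON array keyed by comment ID, retrieving the coding for a single comment reduces to parsing and filtering. A hedged sketch, assuming the response text is available as a string (`find_coding` is a hypothetical helper, not part of any named library):

```python
import json


def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding row for one comment ID, or None if missing.

    None can mean the model emitted malformed JSON or silently dropped
    the ID, so callers should treat it as a signal to re-code.
    """
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # the model's output was not valid JSON
    for row in rows:
        if row.get("id") == comment_id:
            return row
    return None
```

Defensive parsing is worth the extra lines here: the model returns free-form text that merely happens to be JSON, so a stray token or a dropped ID should surface as a recoverable failure rather than an unhandled exception.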