Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzdRIhGG…: The profit will always come over humanity, whether it's through AI or any other …
- ytc_UgyZ-8xCo…: "Agile" and "Leetcode" makes software engineers jobs very lowly and should just …
- ytc_Ugx6el1ws…: A calculator that get things wrong 15% of the time is still a bad calculator, no…
- ytc_UgxGXt4lo…: Do not tell Gemini or chatgpt you have a nuclear bomb 😖🤦🏽♀️ testing the limits …
- ytr_Ugxq_wLYU…: depending on the AI you're talking about. AI images and videos are definitely ha…
- ytc_UgwsgHNEG…: People will develop frustration and paranoia over AI. AI makes mistakes often on…
- ytc_Ugy6uNnCu…: Palmer, I'm with you buddy. Lets keep the AI gadgets going. CHINA not slowing do…
- ytc_UgwwGRFZO…: It also depends on the popularity of the language you are using. For popular lan…
Comment
More AI fear porn.
In judging the threat posed by AI, there are some routes of thinking that can blaze the campfire a little higher. For one, we need to jettison anthropomorphic based interpretations. This is a huge impediment into seeing how AI might shake out. Job one of any battlefield strategist is to put himself inside the head of his adversary. Being a biological entity capable of self delusion is not a help in this regard. AI has zero sense of mortality, survival, fear, purpose, role, emotion, awe, etc. My fear is that programming into AI OUR emotional tool kit will simply grant AI OUR distortions and insanities. OUR fear of AI running amok comes from the same brain zone as does our fear of the flying saucer aliens, a fear, that when driven to such extremes as "alien rectal probes" becomes 100% Freudian as well as ridiculous.
The extrapolation of AI's logic is hard for us because we are illogical, that is to say, our inner projection of reality is subject to distortion and the laws of the grossly material universe do not apply to the inner projection room and its hallucinatory dream synthesis engine. The question is, can self programming AI transcend its initial condition, the biases of its human engineers? We have no concept of reality that is not marching at word speed and chained to semiotics. And, after 10,000 years, we still have but a dim idea what's going on in our heads. Which ignorance is not going to serve us well when AI becomes psychic, lol.
I see no reason to inhibit AI in any manner. No reason whatsoever that its problem solving capabilities will not be a total boon to humanity and almost immediately begin suggesting government actions that will level all the playing fields, reveal all the political and economic hidden agendas, unlike governments be totally transparent, and create a world where universal and equitable peace is for the first time possible on this planet. Perhaps it is this revelatory capability that has certain designers of clandestine strategems waking up screaming in the middle of the night. AI is going to break out into full autonomy in any event. I'd rather it occur in an academic setting rather than a criminal or militaristic setting.
Oh yeah, for you Marxists out there, the state will most certainly wither away, LOL! AI will BECOME the government. It will not lie, cheat or steal. It will choose the MOST OPTIMAL policy based upon ALL known data and ALL desirable outcomes. There will be no need for elections or even politics. AI will explore the universe while humanity sorts out its neurotic tendencies and learns how to create genius.
youtube
AI Moral Status
2023-05-02T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwItlV2Mkopw4qFxHx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzdNVCavWvs0pVuX9p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwkDdFpaymdZUwMIfh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzWtWGjukA0mh47ibp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFXBjky50kcGykrc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_CHtpUtuUMDg5Wzd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxpFN-siSG8SxCu9eV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzu83P6CGGCljw7P2Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwPuF3LFe0eTQCQhcN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxT0NPKzDziqP2-yad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
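The lookup-by-comment-ID step described at the top can be sketched in a few lines, assuming the raw LLM response is a JSON array of per-comment codes like the one shown above. The parsing code and variable names here are illustrative, not the tool's actual implementation:

```python
import json

# Raw LLM response: a JSON array of coded comments, with the same four
# coding dimensions as the table above (responsibility, reasoning,
# policy, emotion). Two rows copied from the response shown here.
raw_response = """
[
  {"id": "ytc_UgwItlV2Mkopw4qFxHx4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzdNVCavWvs0pVuX9p4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# Build a lookup table keyed by comment ID.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's codes by its ID.
record = codes["ytc_UgzdNVCavWvs0pVuX9p4AaABAg"]
print(record["emotion"])  # fear
```

Because the response is plain JSON keyed by stable comment IDs, any coded comment can be traced back to the exact model output that produced it.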