Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `rdc_o9sol3j`: "I’m Generation Jones. I was born during the last 4 years of the Baby Boom. I’ve …"
- `ytr_Ugw6OkBbB…`: "As someone in the same boots using ai, youre wrong and its scary how wrong you a…"
- `ytc_Ugx0JT-vu…`: "Wow! The responses from Sophia are similar to ChatGPT responses I interact with …"
- `ytc_UgzdS_XaC…`: "If AI art isn't creative then humans aren't either. Humans aren't special, we ar…"
- `ytr_Ugw5mF7K1…`: "A whistle blower- Wow. Brave interesting intelligent man! The WORLD will be mol…"
- `rdc_dj6e07d`: "Tesla's cars are not self driving. They have the hardware for it but the softwar…"
- `ytc_Ugzf_vQ5T…`: "I think that AI should only be used for shits and giggles, its fun sometimes to …"
- `ytc_Ugz1vUyEy…`: "Ethics and morals will be automated? This is a cynical, spiritually bereft view.…"
Comment
> This is somewhat silly. This guy is supposed to be "smart." Yet, he is comparing a randomly scraped comment from reddit by some random Star Wars superfan that it was "trained" on (i.e., sucking in vast amounts of data) and thinking that somehow this deterministic network of flowing bits somehow "knew" it was a trick question and thus posed a "humorous" answer. That's just utter nonsense. What actually happened is that the trained algorithm found conflicting information, had few to no good options to select a single religion, and simply opted for some joke random answer that probably got a lot of upvotes as by SW nerds on some social posting site as something of a "hail Mary pass." That's not actual intelligence or careful sentient reasoning, its just picking something someone might be inclined to say as a cop out.
>
> To be clear, it's decision to return that answer was no different than if you asked ChatGPT, or some other AI algorithm "what is the the best possible scenario for winning a game of bilateral nuclear war" and then being amazed to find that the AI's response is that nuclear war is “a strange game” and that it concludes that “the only winning move is not to play.” When gullible people (like Blake) would say "WOW! That's real intelligence!!" Only to be later corrected that it was just the scripted response to the same question in the 1983 movie WarGames, where a NORAD supercomputer runs through all possible scenarios only to find they all lead to global annihilation. It's just copying and pasting random crap it was trained on that has links/relations to the questions being asked.
>
> This is the same reason why Getty Images and numerous individual artists are suing the creators of AI art generation sites like Mid Journey, Stable Diffusion, Deviant Art and others that vacuum up their copyrighted, along with images libraries from ShutterStock and others and morph them into derivative work, without any attribution or licensing. And the hilarious laughter one has when they see the "Getty Images" banner blended and intertwined with these "generative AI" images. That is not "intelligence." It just an algorithm for ingesting, summarizing, blending, and outing the most likely result. If it sounds like a more sophisticated version of Google search where a summary results is shown, you would not be far off. So whatever. Let this guy live in his fantasy land where he wants to make bogus claims. But to those less gullible. It's still just bits of data who's flow is controlled by programmers output results deterministically, based on the data it has been provided.
youtube · AI Moral Status · 2023-04-23T22:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[{"id":"ytc_Ugwk4dIWchsyYYVJg_J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSpKr3wbhZ4TyXXBt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtOc5_k3y1NVYmPdZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCaaCN3jnjfbNtxap4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxnaPFfGv0388XE4mF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}]
```
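The inspector's lookup-by-ID view can be reproduced directly from a batched response like the one above: parse the JSON array and index the coded records by comment ID. A minimal Python sketch, assuming the field names shown in the raw response; `lookup` is a hypothetical helper, and the `raw` string here uses two abridged records copied from the sample output.

```python
import json

# Abridged batched response, copied from the "Raw LLM Response" sample above.
raw = '''[
  {"id": "ytc_Ugwk4dIWchsyYYVJg_J4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCaaCN3jnjfbNtxap4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]'''

# Index coded records by comment ID for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if it was not coded."""
    return codings.get(comment_id)

result = lookup("ytc_UgwCaaCN3jnjfbNtxap4AaABAg")
print(result["responsibility"], result["emotion"])  # prints "developer outrage"
```

The same dictionary backs both views on the page: a random sample just draws keys from `codings`, and the dimension table is one record rendered row by row.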