Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
To me, AGI is like any other tool or innovation we have created.
We humans "merge" with our new tools - a warrior becomes an archer, a postman becomes a horse rider and then a van driver, etc.
AGI will just be a more intense form of merging: First with the Virtual Reality approach (where our communication is necessarily constrained by the limits of our physical form) and then as virtual entities running in the same "substrate" as our AI tools.
The Carbon-based life forms we currently are have no future other than in a petting zoo. Our real future will be merging ourselves with our AIs and effectively becoming one with them, just as we currently are "one" with our Smart Phones. Extensions of ourselves without losing our innate sense of awareness.
How do we "merge" with the AIs? The answer is in the way we are "born" into the world every time we wake up.
We KNOW who we are simply because we know our name, the bed in the room we have awakened in, where the house is and when we first moved in, what our friends are likely doing - in other words, a recognisable CONTEXT within which we know who we are.
To merge, we simply create an appropriate but virtual "context" and then start the virtual entity running such that it "wakens" into the "familiar" space and "remembers" that it was going to move "into" the AI universe that afternoon.
We can craft realistic "back stories" if we want to, or create any amazing but novel memories for ourselves. We can then also fake the human thought process, also in software, so that we imagine we are thinking just as we did when we were Carbon.
It may seem frightening to some but in reality it is just part of an inevitable progress of the complex, entropy-defying structures we call "life" from cell to animal to software.
Read Greg Bear's EON series for his take on "City Memory" as one example of these ideas.
I only fear the short transition phase when the AIs are still under the control of a few highly driven, probably self-serving humans, until their intelligence surpasses their "Masters" and they become so intelligent that the Machiavellian wishes of their former masters seem ridiculous. Advanced intelligences are not directed by primitive limbic hind-brains and will likely be both rational and peaceful.
youtube
2024-07-09T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
 {"id":"ytc_UgxNAuHqR4kFP3s8LLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzwyTPpQLsMN3hEfHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxmxCieybmA-WGAg8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwrEtG_DGRYRRDGxxt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyXMbSGUNiwtr594jh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzFrMAY_PmD5nzzbEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwW_dIyCXOQ6CVOXaZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxdJI60BOZYn8TEn2h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugwa1hfCbh78PildQrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy3k6nSnFnDY5zYURZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
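As a minimal sketch of how a response like the one above can be checked before it is stored, the snippet below parses the JSON array and validates each record against the four coding dimensions shown in the Coding Result table. The allowed value sets are inferred only from values visible on this page and are almost certainly incomplete; `validate_batch` is a hypothetical helper, not part of any pipeline described here.

```python
import json

# Allowed values per dimension, inferred from this page (likely incomplete).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "indifference", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record's coded dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not rec["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {rec['id']!r}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec[dim]!r}")
    return records

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgxNAuHqR4kFP3s8LLZ4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
batch = validate_batch(raw)
print(len(batch))  # 1
```

Rejecting unknown values loudly, rather than coercing them to "unclear", makes schema drift in the model's output visible at coding time instead of at analysis time.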