Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugzt-75wn…: Capitalism and free market economy will certainly not hold against advanced AI. …
- ytc_Ugzk1wDxb…: I am not afraid of AI or robots with human level intelligence. The problem is mu…
- ytc_Ugxw06KaO…: Your apparent quandary has a simple solution; have an important AI construct/pla…
- ytc_UgyIjkuD0…: "Artificial Super intelligence could decide it doesn't need humanity." When I l…
- rdc_et79chw: I looked and there are a handful of African countries with GDPs higher than abou…
- ytc_UgzslXRHb…: It seems even more ironic, that this came out 1 day before Amazon, laid-off 30,0…
- ytr_Ugx4VTbWM…: @Da-gh7cx People want to work, they just don't want to be taken advantage of. Re…
- ytc_UgxRJZNa8…: Art is my passion so I hate to see people use ai and claim it's their art 😭…
Comment
The other thing that's really frustrating is the tech dork preoccupation with "sentient life" and "consciousness" with like zero interest in learning or trying to understand anything about what that means. Nobody really understands consciousness. Sartre's Being and Nothingness comes pretty close to it, but even struggling through trying to understand consciousness, it's difficult, probably in many ways impossible as a conscious being to understand consciousness.
But to the extent it can be explained, consciousness appears to be a mixed, in significant ways emotional, system of perception in which memory of past events is combined with a perception of the flood of present moments passing into the senses and a dream-like perception of an imagined future. This blended, dream-like experience is likely what consciousness is. But it's certainly more difficult to understand than that.
We can deduce that many animals are conscious in a similar way to us because of how they behave in response to stimuli, how they reason through situations - for example hunting, escaping hunters, using simple tools, etc.
I guess it's disappointing that there's a large group of intellectually insecure people who are far more concerned about appearing intelligent than learning and understanding anything.
He just spews jargon: first order, integers, floating point numbers. Just talk fast, use words that describe complex concepts that have nothing to do with the discussion, keep spewing them, and then say we need to change the subject - axioms, premises, moral framework, "I'm uncomfortable with the fact that people die," meta-ethical framework. Just a ton of nonresponsive jargon.
And there are cubicle jockeys commenting that Wolfram is a "nihilist" without any understanding of that term.
I think the tech oligarchs did this to these people because they aren't supposed to be able to understand the meaninglessness of their own lives, and also the dark nature of their life's work, which is building the tools of total totalitarian power for a group of people they really don't understand and don't get to hang out with... except at annual corporate events.
This way of being and thinking is a necessary manufactured thing to permit programmers to tolerate and even enjoy their conditions of life. Since AI is going to first automate maybe 70% of software development jobs, it will free all of those people to do more interesting and human work and to stop having to live like that. They will be freed from pretending to be awkward.
youtube
AI Governance
2025-09-13T18:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzCnJEYgS4Gt6aF4mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2SpFDrQzFraxnnKd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyOqt8CXQJ8RdsMO9F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyOOTryi47R_beqi6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIAa6SDkpXaA0j1dJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymNBA5-cA903RN8d94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgySrkaLrZkp5tjm5ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwfKuG-8cR_batwnEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzLC6OzO5UJWwE3sft4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwCFO_EfVHH2DcNgUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
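A raw LLM response like the one above is a JSON array of per-comment codings, and the "Look up by comment ID" feature implies an ID-keyed index over it. The sketch below, a minimal assumption about how such a lookup could work (the variable and function names are illustrative, not from the tool itself), parses a two-entry excerpt of the response above and retrieves one comment's codes by ID:

```python
import json

# Excerpt of a raw LLM response: a JSON array of coding objects,
# one per comment, mirroring the full response shown above.
raw_response = """
[
  {"id": "ytc_UgzCnJEYgS4Gt6aF4mt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy2SpFDrQzFraxnnKd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Build an ID -> coding dictionary so any comment's codes can be
# fetched directly by its comment ID.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up one comment by ID and read its coded dimensions.
coding = codings["ytc_UgzCnJEYgS4Gt6aF4mt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # none indifference
```

Keying the parsed array by `id` makes each lookup O(1), which matters when a batch response covers many comments and the inspector fetches one coding at a time.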