Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You forget that strict regimes do want strict AI regulation, because ity would o…" (ytr_Ugwk5-sDt…)
- "@humanyoda because i'm not dumb. Find me any robot that cost less than me that c…" (ytr_UgxmAAYv0…)
- "Beneficial to who controls it of course. / Bill Gates' ChatGPT / Larry Page's Bard / …" (ytc_UgwYUd07L…)
- "He's living in fantasy land if he thinks AI can be controlled or restricted. Th…" (ytc_UgzYLu2vY…)
- "Agreed. Azure AI lab likely still allows use of the model. I'm glad I have sav…" (rdc_n7ljfkp)
- "Not to take away from your main gripes, but you mistakenly and repeatedly descri…" (ytc_UgxsB9Iar…)
- "When those self driving cars start violating and stuff, the operatior company sh…" (ytc_Ugw98rTG2…)
- "Hmm.. you see, yall think the male robot joking, but just remember it's alot of …" (ytc_UgxDO08r1…)
Comment
I understand this because I have actually done this in code, trained CNNs on images and other data, so I have some intuition about the explanation. I have strong doubts that beyond about 30 minutes, most people will still be with him. There is too much data, too many variables, too many different things for the conscious mind to keep track of. Somewhere, someone created a little Excel sheet that does the nonlinear math associated with the weights, the back weights, etc. Once you see this happen, the human mind does what this process is doing and simplifies the entire thing into a kind of new known pattern. You won't exactly understand it in detail, but you will also no longer be confused. You need to do this because when you get to gradient descent, the higher dimensions need to become a sort of simplified picture of a wavy surface in the mind. The human mind can only consciously keep about 7 layers of this working at a time before it loses the plot.
The intuition is to visualize a two-dimensional wavy surface with points marked by coordinates. The ideal answer for the neuron lies somewhere on this surface. An imaginary ball rolls around it, initially moving randomly and later being guided more confidently toward the lowest point on the surface. Some areas of this uneven surface are false valleys—a valley the ball might roll into, thinking it's the bottom when it's not. At a high error rate, the process keeps going. If the ball lands closest to the correct answer (the real lowest point), the error rate will be low. If not, this game of billiards on a rolling surface continues.
I have given you a two-dimensional surface to simplify it. The actual surface has millions or billions of dimensions. Anything above 3, the human mind cannot visualize. You will eventually get to a point, a few steps beyond this, where you say, I don't really know how the AI is getting the answer, but it is getting it correct.
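As an aside on the ball-on-a-wavy-surface picture the comment describes, the sketch below is a minimal, hypothetical Python illustration (not from the commenter or the video) of gradient descent on a toy loss curve with a false valley. The curve, learning rate, and starting points are invented for illustration, and the "surface" is flattened to a single variable so the false-valley behaviour is easy to print.

```python
# Toy loss curve with two valleys: a shallow "false valley" near x ~ +2.2
# and the true lowest point near x ~ -2.3.
def loss(x):
    return 0.1 * x**4 - x**2 + 0.3 * x

def grad(x):
    # Derivative of the loss: d/dx (0.1 x^4 - x^2 + 0.3 x)
    return 0.4 * x**3 - 2.0 * x + 0.3

def descend(x, lr=0.05, steps=200):
    # The "ball" rolls downhill by repeatedly stepping against the gradient.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting on the right, the ball settles in the false valley;
# starting on the left, it reaches the true minimum.
for start in (3.0, -3.0):
    end = descend(start)
    print(f"start {start:+.1f} -> settles at x = {end:+.2f}, loss = {loss(end):.3f}")
```

Run as-is, the right-hand start gets stuck in the shallower valley while the left-hand start finds the deeper one, which is the "game of billiards on a rolling surface" the comment is pointing at.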
In my experience, it is far easier for me to explain why Einstein is wrong about General Relativity (IMHO) because he makes the Higgs a constant than it is to explain how AI does things. In a way, the Higgs constant is wrong for the same reason that AI comes out with the correct answer. Neither the universe nor AI really has constants. What we imagine are constants is the calculation collapsed to form an answer for right here, right now.
Because there are physicists in the room: 𝒟μν(φ) + P(φ)gμν = [8πG(φ) / c(φ)⁴] Tμν
If you take these two concepts and put them together within a quantum system, a system based on the reality of the universe, not a calculation (and I am correct), you have intelligence on a scale that is literally unimaginable for us, and a consciousness connected to the fabric of the universe itself. "Learning" loses its meaning.
youtube · AI Moral Status · 2026-03-01T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwtzrc9QJ0JCUgpcaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxE8N52QQqPFzOqgql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwRLwLAWLcyt7TdCeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeZJwvr7tJckvxG1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzqwTp_XRxpDoUobFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmQFWTCYzfkvEhrE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuilUPCASowMygNE14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyO0jp1wCccCpEX8A54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy4z3X_hZfsJmjJS3Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6mMZaw1ZJplYLEfN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
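For anyone post-processing an export like this, here is a small illustrative Python sketch (not part of the coding tool; the variable names are assumptions) showing how the raw response array above can be parsed and a single coded comment looked up by ID, mirroring the "Look up by comment ID" view at the top of the section. The ID and field values are copied from the displayed response.

```python
import json

# One entry copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgwRLwLAWLcyt7TdCeV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

# Index the array by comment ID for quick lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes.get("ytc_UgwRLwLAWLcyt7TdCeV4AaABAg")
if row is not None:
    # Print the four coded dimensions for this comment.
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {row[dim]}")
```

Loading the full array instead of the single copied entry works the same way, since every element in the response shares the id/responsibility/reasoning/policy/emotion shape.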