Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Mathematically, “Back Propagation” uses the first-order derivatives of a neural net's inputs to guide its learning. It's analogous to pulling on the loose white threads around a hole in the knee of your worn blue jeans: most are tight and do not yield, but some are loose and show you where the opening can grow. Yet using first-order derivatives does not make sense given the structure of neurons. It's analogous to speaking in static current and voltage terms when the brain communicates in the frequency domain. It makes more sense to learn in the second-order derivative range.

Much as a spider can find a fly in its web in the dark by plucking the threads, the brain can simultaneously stimulate a myriad of connections in its neural net. In this way, a stimulus can propagate across an enormous number of nodes and cascade across many specialty processors. Stored connections that resonate with the stimulus are activated; the rest stay mute. From a digital-efficiency point of view, this is incredibly inefficient: out of 100 trillion bits expended, only a few reply. But, as Mr. Hinton explains, the human brain has a vast advantage in its number of connections and a huge disadvantage in its number of computational cycles. Our disadvantage is even worse when you consider that we have only 2-5 years to bootstrap the majority of our knowledge of communication; the next 15-20 years are spent fine-tuning that knowledge.

AI data centers are inherently limited because they try to solve a neural-network calculation 64 to 512 bits per cycle, in a linear fashion, at gigahertz speed. The brain potentially energizes trillions of connections in parallel at hertz speed. The AI still falls short with a gigawatt of power, where the brain functions on 20 watts.
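For reference, the first-order learning the comment describes can be sketched as a toy single-neuron gradient-descent loop. This is a minimal illustration only, not taken from the comment or the video; the data, learning rate, and step count are made up:

```python
# Toy single-neuron example: first-order derivatives (gradients) of the
# squared error with respect to the weights tell each update which way to
# move -- the "loose threads" that yield when pulled.
def train_neuron(xs, ys, lr=0.1, steps=200):
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            pred = w * x + b   # linear neuron, no activation
            err = pred - y     # derivative of (pred - y)**2 / 2 w.r.t. pred
            w -= lr * err * x  # first-order derivative w.r.t. w
            b -= lr * err      # first-order derivative w.r.t. b
    return w, b

# Fit the made-up points (0, 1), (1, 3), (2, 5), which lie on y = 2x + 1.
w, b = train_neuron([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

After a few hundred updates, w approaches 2 and b approaches 1; nothing here captures the second-order or resonance-based learning the comment speculates about.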
youtube AI Moral Status 2026-03-09T16:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyAXJG9ixghY5d5em54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxe4F5sPU4fjYquAlN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyqWT8fG3KOt6LcUtN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw9F2WpSpAJwOnEI0N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugydcc7CF37xS28GsD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwtsdB5rBXUtoAohhN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_XQpJLlYhdllxmEd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwSrpqqYb-C5gOrF14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRQ2T6527DO6WcdX14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwDqiN0twmoi07EOyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
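A raw response in this shape can be parsed and tallied with the standard library. This is a hypothetical sketch, not part of the coding pipeline; the two records below are copied from the response above, and a real run would parse the full array:

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response, embedded as a
# JSON string for a self-contained example.
raw = ('[{"id":"ytc_UgyAXJG9ixghY5d5em54AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"ytc_Ugydcc7CF37xS28GsD14AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')

codes = json.loads(raw)
# Tally one coding dimension across all parsed records.
emotion_counts = Counter(c["emotion"] for c in codes)
print(emotion_counts)  # → Counter({'indifference': 1, 'outrage': 1})
```

The same `Counter` pattern works for the responsibility, reasoning, and policy dimensions.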