Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a retired software architect and amateur neurobiologist and psychologist, I think the energy use problem is using one big neural network to do the language construction. I expect people are already making "idiot savant" networks using far less energy at the cost of only being able to discuss one narrow topic. Your PC and phone have vector processing to speed neural network programs. I don't know exactly where they are used, but it is becoming part of software architecture.

To get AGI (whatever that is) without using all the energy in the universe might require breaking down the one big network into smaller parts that work together. The cerebrum, cerebellum, frontal cortex, etc. are separate networks that all work together. If it was one big net, it might have a 10-foot diameter and run so hot it boiled blood.

When computers were new, they all had a single CPU running a single instruction stream. It only took a few years between the first computers and banks using them. For modern computing, you couldn't do it on mainframes. DP had to go from early (often failed) attempts at parallelization to multi-core CPUs and high-speed interconnects making server farms possible. This required basic science progress, architecture innovation, algorithms, and coding. It took decades to figure out, wait for the hardware, and write the code.

I expect the tantalizing results so far make us optimistic about the near-term future. I also expect that what seems just out of reach will require much more time. With most everything there is an initial accomplishment, followed by disappointment, followed by a slow steady climb to something approaching the dreams/expectations of a new technology, before we understand the limits or even how to do the general case well enough.
youtube AI Responsibility 2025-12-12T20:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           industry_self
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzJD4677wXn6ZZa2BJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy_813MxAtv1gyK4u94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzNAd02qBx7Noc0mrF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwXUSXxGlVLzkXcoG54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugz3CmBvEbqmY9qZ6D54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwtnqL2wcNYfTPSUgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyGTmI9WYL0ou-ANXp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzLdppqLlP8mQaAQyN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwYZrYtNmu4CTLbu6F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzFo5pY00-f8IVodaJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]