Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"When one AI unit learns, all of them can learn"... yes and no. Who is going to inspect every node in the neural network embedded within the AGI to make sure it is learning the "proper" thing. The nodes are basically just numbers mutually influencing other nodes (essentially meaningless to human). The number of ways the nodes can mutually influence the entire system is practically infinite. Supervision on the other hand is just going to be based on a finite set of physically observable outcomes and the "approved" model will be duplicated. This may be fine for a narrow scope dedicated AI system but it's not going to work for an AGI any time soon. Unless there is a human monitoring the AGI and have the final say whether to use the statistically-based suggestion provided AGI gets applied exactly or not, a fully automated AGI could prove disastrous. Now couple that with the sufficiency of information available to the AGI (human users tend unintentionally leave out details most of the time due to what we can call contextual knowledge). Here's a very simplified example: A painter can walk in to room and the client can just say "make that white" and the painter will know exactly what to do. Now you, standing in for the AGI... can you guess what the painter is required to do? I mentioned "room", "painter", "white"... Is the painter required to paint the wall, the ceiling, or shirt the client was wearing in his oil painting portrait white? Now we all know our favourite LLM tools are dedicated AIs. Even with a dedicated AI, we still run into problems quite often if you ever pay attention to the results and not just accept everything as provided. Now, lets consider a fashion design AI tool. Majority of the time, the AI generated design would require manual tweaking of color, profile cutting, length, width adjustments before the design is actually practical and to taste. In comparison, human intelligence is different when it comes to learning a task, concept or anything for that matter. We dont just learn from one source and then transfer that "learnt model" to our kids. Otherwise we would be tranfering all our right and wrong bias to them. A good thing too, because Im sure we wouldnt want a man who thinks wife beating is a ok and transfer that to his sons. Similarly the simple copy and pasting of weights from one AGI to all AGI would be a very bad idea. There needs to be some allowance for contingent plasticity built in to the population of AGI (like how a kid can be the sone of a wife beater but learn on the fly the proper social values from his surroundings). As far as I know, we dont have that yet in any of the AI system (all existing AIs follow the bias presented by parent company/lab). But then again, if any AI gains such capability, there is also the danger for that AI to become sentient and that could swing in any direction - good, bad and whatever in between, just like a child can growing up. So no, I dont think the concept of "When one AI unit learns, all of them can learn" is a good concept to have
youtube 2026-04-19T05:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwchrjmlV_cUCTARkR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxG822Jqq7mb1dxGwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyaYK9W-7sZii4brqx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwqHJPb-8_hry4G2VJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugzhrd5Fysz4Zo-iOSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw0TZb4838vEAJewp54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxXT7lzcwuQcSCwwCN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwQPEmcPcLOvESswRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyswolHTH0O82du3R94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwSQoqrocMajzFWB_R4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"} ]