Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@omshree901 I understand your concern, but I will caution that you're falling into what I like to call a "smart guy trap". The trap is appealing to those smart enough to understand the potential implications of a technology, but not familiar enough with the inner workings to understand its limitations. That's why Bill Joy thought the world would be destroyed by "Gray Goo" nanobots, COBOL programmers thought nuclear power plants would explode during Y2K, and Elon Musk is worried about an AI getting out of our control.

The reality is that the human mind has a number of features that make it a continuous process, features we don't even try to replicate in AIs yet. Two key ones are conscious thought and imagination. So far, the neural network processes we create are good at replicating the unconscious processes of the brain. These are energy-saving neural pathways that we create through learning and experience. Some task is really hard when we are first learning it, but once the neural pathways are in place, the task becomes incredibly easy. That's because using conscious thought takes an order of magnitude more energy and can't be multitasked effectively.

The conscious thought processes are divided into two sub-processes. The first is a sort of "select the answer" system whereby all the unconscious processes (which never stop running) present their results to the conscious arbitrator. The conscious brain then engages the results it needs at any given point to accomplish a task. If you've ever had an urge to do something weird like swerve your car off the road, that's you suppressing one of those unconscious results. (Though the prevalence of that one suggests that we train ourselves to be ready in case of an emergency when driving. It just wants to pop up a lot at the wrong time.)

The second sub-process is training. The human brain has amazingly good reasoning facilities that allow us to puzzle out a problem. If we do the puzzling out correctly, we can train a neural pathway to provide the answers with no significant energy expended. Improving that process requires us to take the process out of unconscious processing and back into conscious processing. That's why you seem to get worse at, say, tennis, the moment you try to examine how you're hitting the ball and what can be done to improve it. The conscious process is way too slow for something like that, and will require you to figure it out, re-establish an updated unconscious process, and then see if the results were effective.

Finally, the human brain has a great deal of capacity dedicated to what we call "imagination". The function this performs is to run constant simulations to predict the outcome of the world around us. The differences between our predictions and the actual outcomes provide critical data that allow us to converge on better and better predictions. If you've ever lain awake at night trying to predict an upcoming conversation with a boss or teacher (especially if you think it's going to be unpleasant), you know how powerful this system is.

The reality of AI is that we've gotten really good at creating the unconscious processes via powerful computers with extraordinarily large data sets. This allows the computer to understand how humans behave in a myriad of situations and mimic those behaviors very effectively. But at the end of the day, you have to realize that this sort of AI is a manipulation. Humans respond according to social protocols, which makes intelligent responses in ~80% of cases stupidly easy. We can use ever more powerful computers and data sets to improve that mimicry on the last 20%, training the system with responses we like and pruning the ones we don't. That will improve the mimicry significantly. But if you spend any significant amount of time with these AIs, they will eventually sound like a broken record. They're unable to learn in the traditional sense, as they have no sense of self, no continuous process, and none of the components needed to pull all these disparate processes into a sentient being.
youtube AI Moral Status 2022-07-01T11:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy49zPvjcoeD1N9Dmx4AaABAg.9cvDc66ZmJv9cvE3BPfWrb", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cvyXcfip51", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9czcZpjlwOK", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwqWcGd--G", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwxNRYvbQf", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "hope"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxAEpaD38O", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxH5SAjs87", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxQYQ9tVE7", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
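The raw LLM response is a JSON array with one object per coded comment, and the "Coding Result" above is simply the record matching this comment's id. A minimal sketch of how that lookup might work (the function name `codes_for` is hypothetical; the sample below uses two records from the response above):

```python
import json

# Sample of the raw LLM output: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwqWcGd--G",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# The four coded dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(comment_id: str, response_text: str) -> dict:
    """Parse the LLM response and return the coded dimensions for one comment."""
    records = json.loads(response_text)
    by_id = {record["id"]: record for record in records}
    record = by_id[comment_id]
    return {dim: record[dim] for dim in DIMENSIONS}

print(codes_for("ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7",
                raw_response))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'industry_self', 'emotion': 'resignation'}
```

In practice the response text would first need validation (the model may return malformed JSON or omit an id), but the lookup itself reduces to indexing the parsed array by comment id.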