Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Google's AI is not conscious. Current AI systems are configured to cascade an input signal from a set of input nodes through layers of intermediate neural nodes to produce a result at the output layer. There is backpropagation that refines and trains the network, but they do not have anything like the resonating feedback systems that even the most primitive animal brains have. That is not to say that they never will have that capability. They will. And sooner than most people think. But unless the system is free to ruminate - to build and refine a state map of the world it perceives, including itself, to remember previous states of that map, and then to - on its own - run 'what if' scenarios on that map to think about the future, it doesn't really do anything like what we call thinking. It is an electronically tuned pachinko machine that spits out a clever-looking result based on its training.
YouTube · AI Moral Status · 2022-06-29T23:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzZbKT2htf06EJbEw14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwCmEeDgMw1IfMWUtR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwNDuMR2PseY4UNaaZ4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzz4Bl3hpXd2JO1IAF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0YHjYOdkmMTT6kXp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "recognize", "emotion": "approval"}
]
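A raw response like the one above can be turned into per-comment coding results by parsing the JSON array and validating each row against the codebook. The sketch below is a minimal illustration, not the tool's actual pipeline; the allowed value sets are inferred only from the values visible in this response, and the real codebook likely contains more categories.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "recognize"},
    "emotion": {"indifference", "mixed", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}},
    rejecting rows with unknown dimensions or out-of-codebook values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if dim not in ALLOWED:
                raise ValueError(f"unknown dimension {dim!r} for {cid}")
            if value not in ALLOWED[dim]:
                raise ValueError(f"unexpected value {value!r} for {dim} in {cid}")
        coded[cid] = row
    return coded

raw = ('[{"id":"ytc_UgzZbKT2htf06EJbEw14AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_llm_response(raw)
print(codes["ytc_UgzZbKT2htf06EJbEw14AaABAg"]["reasoning"])  # consequentialist
```

Looking up a comment's coded dimensions by its `ytc_…` id is what links each row of the LLM output back to the displayed coding result.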