Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is an interesting discussion but I still have some issues with the idea of present LLMs at least starting to become any type of intelligence. I believe for each question, they run the query through the neural network and provide the output. This query does not impact the layout of the neural network, the network does not really do anything between queries. If the network was being provided regular inputs that actually caused the network to rebalance its weights, then I might consider that an LLM was actually becoming intelligent. I suspect that any apparent desire to live or different actions by an LLM when it thinks it is being tested have less to do with a decision of the network and more of a compression of how the data it was trained on responds and the LLM using that to predict the most likely response. Not saying these systems cannot learn intelligently, just that I don't think most of the systems out there are in any practical way.
youtube AI Moral Status 2026-03-04T17:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyXT3xyxO58fmJJm3t4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzFXm9xjI61-yzgkpR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugx13Y9mHcoom1AOEGt4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyvsjnSO8pt0noQnz14AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugyy1Qyk0nOnRml1KP14AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxbKZkeCsEOUnKjCe94AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugw5Qz7sS_8BtoMvevV4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgwdN02-9aUpMceUfE54AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwgUl3RBmW333u78I94AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugxt5cVb1OzEKKLH4PF4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "mixed"}
]
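A response in this shape can be turned into per-comment lookups with a few lines of Python. This is a minimal sketch, not the pipeline's actual parsing code; the field names (id, responsibility, reasoning, policy, emotion) are inferred from the dump above, and the two sample rows are copied from it.

```python
import json

# Raw model output: a JSON array of per-comment codes. The two rows below
# are sample entries copied from the response shown above.
raw = '''[
  {"id": "ytc_UgyXT3xyxO58fmJJm3t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzFXm9xjI61-yzgkpR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

codes = json.loads(raw)
# Index rows by comment id so one comment's coding result can be looked up
# directly, as the "Coding Result" table above does for a single comment.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_UgyXT3xyxO58fmJJm3t4AaABAg"]["emotion"])  # indifference
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is worth catching when batch-processing many responses.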