Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get the impression it's mechanically impossible for AI to have consciousness, experiences, and/or proper inclinations. I mean, when I have a conversation with a simple AI, the way it works mechanically is that my initial statement or question is turned into tokens, those tokens get passed to a bit of code that uses the statistical model to predict the next word repeatedly until it hits a STOP. Want to make it a "reasoning" AI? It looks like we would just need to add another loop (with some extra text instructions) in the code and have the prediction piece through to STOP 3/4/5 times to extend the context of the conversation before generating the response that gets sent back to me. And if I don't send another statement or question, the AI would never generate another response. If we can ever get an AI that more or less never stops generating "internal" predictions and can spontaneously send texts to connected users maybe we will get super intelligence, but I'm not sure we can meaningfully create super intelligence.
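The mechanics the commenter describes can be made concrete with a minimal sketch. This is not any real model: `predict_next` is a hypothetical stand-in for the statistical next-token predictor, and the "reasoning" mode is just the extra predict-to-STOP loop the comment imagines, run before the visible reply is generated.

```python
STOP = "<STOP>"

def predict_next(tokens):
    # Hypothetical stand-in for a statistical next-token model:
    # emits placeholder tokens, then STOP once the context is long enough.
    return STOP if len(tokens) > 8 else f"tok{len(tokens)}"

def generate(tokens):
    # The core loop the comment describes: predict the next token
    # repeatedly until the model emits STOP.
    out = list(tokens)
    while (nxt := predict_next(out)) != STOP:
        out.append(nxt)
    return out

def respond(prompt_tokens, reasoning_passes=0):
    context = list(prompt_tokens)
    # "Reasoning" mode: run the same predict-to-STOP loop a few extra
    # times to extend the context before producing the reply.
    for _ in range(reasoning_passes):
        context = generate(context)
    return generate(context)
```

Note that `respond` only ever runs when called, which is the commenter's point: without a new incoming message, no further generation happens.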
Source: YouTube · AI Moral Status · 2025-10-31T05:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyUSZEt_D_L-srdtY14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwAR-miK3McSNbQPlh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwsUjPct9PdMZ4XAVV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwyUpBv84xu5HK-UJ14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwP7jRZYjiOlpH3Ve94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAxxpxkUA1pNFS3IF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx0bCq7miXbvb3zCFR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyTcGoRQ6hE812SaF14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzoyNSQG_OyUFPjpMB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz_mNWRN9AgxSfaC994AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
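A sketch of how a coding table like the one above can be derived from this raw response: parse the JSON array and index the rows by comment id. The `raw` string below is abbreviated to the first entry of the response; the field names match those in the raw output.

```python
import json

# First entry of the raw LLM response, as a JSON string.
raw = ('[{"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')

# Index the coded rows by comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgyUSZEt_D_L-srdtY14AaABAg"]
print(row["emotion"])  # indifference
```

Looking up a comment's id in this mapping yields the four coded dimensions shown in the results table.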