Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs don't think, they just do very complex math to string together text based on statistics. It is really weird to hear people talking about "AI" thinking, because thinking suggests cognisance of what it is doing. LLMs don't read your question/instructions then think of an answer to give you. It is just statistically, based on their massive datasets, this is what a human would have said in response to you. It is mimicry using statistics not thought.
Source: YouTube · AI Moral Status · 2025-09-02T20:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzcINGVlRnlKoXSCFR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy9_GoFfI5BBxRhO_14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzVWkU4yU1tMTji5k94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxyGk3ZMD4y2xDfXyF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzHgUH733BdIHWgkgh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwt5vqwxf8enp5gMbl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgypN3pZSKaWG7ExFFt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwbn7kg3XdazdkbO5Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz2acPvN7ciBbU170R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxKEhIAd5NcwW5gBx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
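Because the model returns one JSON array covering a whole batch of comments, the coding shown in the table above has to be matched back to its comment by `id`. A minimal sketch of that lookup, assuming the raw response parses as JSON (the `coding_for` helper is hypothetical, not part of the tool; the field names match the response above):

```python
import json

# A fragment of the raw LLM response above: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_UgzcINGVlRnlKoXSCFR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzHgUH733BdIHWgkgh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one comment id."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    return None  # id not present in this batch

coding = coding_for(raw_response, "ytc_UgzHgUH733BdIHWgkgh4AaABAg")
print(coding["emotion"])  # indifference
```

In practice the lookup would also need to tolerate malformed output (a `json.JSONDecodeError` or a missing field), since nothing forces the model to emit valid JSON every time.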