Raw LLM Responses

Inspect the exact model output returned for any coded comment.

Comment
there is clearly a kind of AI we are using now that just gives the illusion of intelligence. For example by annualizing its feed information and calculating from that the most probable next word. That creates an illusion , then it 'hallucinates' spewing nonsense, breaking the illusion of intelligence. SO, Is there something else? is there something that can really 'THINK'?
youtube AI Governance 2025-06-29T02:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzhaD8PcOEBpawrteB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzuRQcbcR943ZXDwOd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzLu4j1ceWsnvWY60l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzf6TtQvmWzS2ydtFJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyLO0_QhLCT_jrg-fd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxjRjnDFNJ2aljCLL54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwV4qEmS6bqelhGhs94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgygDjfIKd7FNgKVCNd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzjmtr4A78xm32DTRh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw7O9uqTFOCaI6EbU54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
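The raw response is a JSON array of per-comment records, so matching a coded comment back to its dimensions is a simple lookup by `id`. A minimal sketch of that step, assuming the response parses as valid JSON and every record carries the four dimension fields shown above (the `lookup` function name and the truncated two-record sample are illustrative, not part of any real pipeline):

```python
import json

# Sample batch response in the same shape as the raw LLM output above
# (truncated to two records for brevity).
raw = '''[
  {"id": "ytc_UgzhaD8PcOEBpawrteB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLO0_QhLCT_jrg-fd4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]  # raises KeyError if the id was not coded
    # Keep only the coding dimensions, dropping the id field itself.
    return {dim: record[dim] for dim in DIMENSIONS}

print(lookup(raw, "ytc_UgyLO0_QhLCT_jrg-fd4AaABAg"))
# {'responsibility': 'government', 'reasoning': 'deontological',
#  'policy': 'regulate', 'emotion': 'fear'}
```

A real batch run would also want to verify that each returned `id` matches a submitted comment and that every dimension value falls in the expected code set before accepting the coding.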