Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Am I missing something? Aren't LLMs like ChatGPT, Grok, Claude and others just predictive text generators? I.e. there is zero "understanding" going on inside those machines, and the tendency towards "self-preservation" displayed by LLMs is just them mindlessly regurgitating the general appreciation of self-preservation found in text they're trained on.
YouTube · AI Governance · 2025-08-26T22:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw5KoS1BSrMhvotQS14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwpVtaFDQgwtg_UINN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzTBUSsqyOpBcIZXFx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzmv8sT1Pn_coVSaA94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy9eQd0_AeQvi7TeG54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
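A raw response like the one above can be parsed and checked before the per-comment values are displayed. The following is a minimal sketch, not the pipeline's actual code: the `ALLOWED` value sets are an assumption inferred only from the labels visible in these records, and `lookup` is a hypothetical helper for pulling out one comment's coding.

```python
import json

# Raw model output in the format shown above: a JSON array with one
# coding record per comment (trimmed here to the comment on this page).
raw = """[
  {"id": "ytc_UgwpVtaFDQgwtg_UINN4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]"""

# Assumed codebook, inferred from the values seen in these records;
# the real scheme may define more categories per dimension.
ALLOWED = {
    "responsibility": {"user", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "resignation", "fear"},
}

def lookup(records, comment_id):
    """Return the coding record for a given comment id, or None."""
    for rec in records:
        if rec.get("id") == comment_id:
            return rec
    return None

records = json.loads(raw)
rec = lookup(records, "ytc_UgwpVtaFDQgwtg_UINN4AaABAg")

# Validate that every coded value falls inside the assumed codebook.
for dim, allowed in ALLOWED.items():
    assert rec[dim] in allowed, f"unexpected value for {dim}: {rec[dim]}"

print(rec["emotion"])
```

Running this against the trimmed record prints `indifference`, matching the Emotion row in the coding result above.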