Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
3:08 it's really not that weird. Humans do this all the time, and since these programs are trained with human inputs, it makes sense that they would behave in similar ways. When I say "raise an AI like a child," I am employing metaphor as a shortcut, applying a widely memetic concept (childhood education) to one that is more niche (neural net training). What I actually mean is "model _conditioning_." Raising and conditioning are tightly related in my neural network. They might be less related in the mind of a tech professional actively involved in AI model spaces. Humans attach weights and morals to words and phrases. Just as the cryptocurrency bros ended up recreating centralized banks because that is the best model they knew growing up, so too does an LLM map word relations the way humans do (but since every human's map is personal to their own life experiences, of _course_ an artificial model will not be the same as everyone's).
youtube AI Moral Status 2026-01-03T19:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzEsVP21oc-Ojg4dyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYRoa7_MkTAteWoDx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwvkRx-YifE-9AlUX14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-wjt_YlEqPDKv_MF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJaZumQ05Kxh8qzcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgydiX0uQWQfuct__il4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwGQez1Jq1r6JRHyat4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwtqJdABfpvaNggSwV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzaQUYEJsaVOXIloFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxInO7aIaWh0b66EUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
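A raw response like the one above can be parsed and indexed by comment id so that any single comment's coding can be looked up. The sketch below is a minimal example, assuming the four dimensions shown in the coding table; the allowed label sets are inferred only from the values observed in this batch and may be incomplete relative to the full coding scheme.

```python
import json

# Label sets observed in this batch's output (an assumption; the full
# coding scheme may permit additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    and return a mapping from comment id to its coded dimensions,
    rejecting any label outside the observed sets."""
    rows = json.loads(raw)
    codings = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings
```

For example, calling `parse_codings` on the response above and indexing with `"ytc_UgzEsVP21oc-Ojg4dyt4AaABAg"` recovers the `none`/`unclear`/`none`/`indifference` row shown in the coding table.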