Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Using Generative AI is just tracing, but with extra steps. If I have learned something, thinking back at how I traced pictures from a Garfield comic book as a kid, AI is basically tracing fragments of different pictures and combining them. I have learned this while dealing with language models like NAI. And yes, I am using a language model to train myself to improve and diversify my writing. What I write is neither directly AI generated nor AI assisted, I wanna be very clear about that, rather I study the AI's reactions and conclusions to what I have written so far and use that information to steer a potential reader's expectations in a certain direction. That's how people are SUPPOSED to use AI. As a tool to automate tedious, repetitive tasks or in the case of language models, as a neutral external party that is not biased towards or against you and your work (unless you ask it to). What I learned doing that, is that AI doesn't understand a single thing it's doing. It chops up the data you feed it into "Tokens", which are tiny fragments that are nonsensical by themselves and all the AI knows is which tokens occur in which frequency with each other and how to reproduce that pattern. This shows me that an AI is basically tracing my work, but in many tiny fragments, rather than as a whole or in a few smaller samples. The principle is the same. It is theft. Tracing with extra steps.
Source: YouTube — "Viral AI Reaction", published 2025-04-25T00:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwDJ5RH1I0sIWKAwWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwHNZxP8JkW7u9cIgJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx8MnDb0cSZQVg75Qd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0meCiziEYydysP5F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGuQYqy4Zcy1ftMbR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDjHFY79KjbV9nF_V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugych_LZHduMkoQIN214AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNdS3LJhVjlsqONI14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz4PHd03DQZ6KkF1FF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw7rpvKHMk5tJIJXHd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
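A raw batch response like the one above can be validated and indexed before any code is written back to a comment record. The sketch below is a minimal, hypothetical example (not the tool's actual pipeline): it checks that every record carries the five expected fields, looks up one comment's coding by its `id`, and tallies the emotion labels. Only two records from the array above are inlined for brevity.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id":"ytc_Ugw0meCiziEYydysP5F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDJ5RH1I0sIWKAwWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

# The four coded dimensions plus the comment id, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Raise if any record is missing one of the expected fields."""
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
    return records

records = validate(json.loads(raw))

# Index by comment id so a single comment's coding can be inspected.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_Ugw0meCiziEYydysP5F4AaABAg"]["emotion"])  # → approval

# Tally the emotion dimension across the batch.
emotions = Counter(rec["emotion"] for rec in records)
print(dict(emotions))
```

Validating before storing the codes catches truncated or malformed LLM output early, rather than surfacing as missing values at analysis time.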