Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You’re someone who taught many of us key principles that sent us into the field we grew up in. So it’s a little disappointing to watch you fall victim to the same ignorance-induced fear-mongering that 50-year-old mums share about AI on Facebook. Similarity does not make identity. LLMs are probability-based imitators. They are neural maps trained to turn inputs into believable imitations of a human response. This is done entirely through algorithms and probability; there is no introspective thought present. The ‘thoughts’ seen in newer reasoning models are just a more complex way of operating through the same methods, using iteration to improve accuracy. The communication of these thoughts is just another imitative output. We use them because we do not yet know how to make adaptive, reasoning-based programs that don’t immediately lobotomise themselves and collapse. As of current discovery, machine intelligence is categorically impossible. You touched on the topic of role-play in your interview with the author of “If Anyone Builds It”. I think you both brushed past this way too easily in your eagerness to discuss the narrative that clearly interests you more. These AI are trained on immense amounts of data that likely have a very strong tendency towards rogue outcomes under agentic testing. The internet’s discourse around AI is lathered with sci-fi hypotheticals. When the reward system bases itself on intuitiveness of response, we find that people will overlook this rogue behaviour as an artefact of sci-fi training data, instead assigning non-real importance to it because our own predispositions make it appear as a genuinely intuitive response. When it is, in fact, nothing more than roleplay, from a program that cannot think, feel or create meaningful goals in any intelligent way.
Until we crack the code of keeping neural plasticity open during deployment, these language models will never be intelligent, and it’s really quite important that people understand this. The internet’s discourse around AI is currently dominated by ignorance around the real dangers of AI because it’s an easy scapegoat for everyone. Startups can blame unethical function on magical awakening, and the uneducated masses get to forget about rampant pollution, resource consumption and environmental destruction in favour of fearing the Terminator overlords. Even the much-discussed agentic AI systems are really not as alien or complicated as you seem to believe. They are the same workflows the tech industry has been using for decades; they just make use of the strength of LLMs to improve their operation. They aren’t thinking any more than a copy of Call of Duty does while you play it. They’re simply program workflows that utilise the strength of trained models, through API calls, to perform tasks that aren’t as easy to manually code for.
youtube 2026-04-02T15:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgydsObKgWJzks654EN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5MnejrGSbsruraBp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSKJ26vaEOaA8Jw0d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx4SXVsDWX27-pPXkR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyNR7QnmSlDJXiQn6h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzxuugFx3yJzEWXRwF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyX5qqxJAtYwmi2N4R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz3uAY5bKrr9M91PB14AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzgpaoBeFt8SveWsF14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxYNiQ3TxczR-tT8Kt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
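The raw response above is a plain JSON array, so it can be post-processed directly. A minimal sketch, assuming the coder's output is saved as a JSON string: parse the records and tally the labels per dimension with `collections.Counter`. Only the first four records are reproduced here for brevity; the field names are taken from the output above.

```python
import json
from collections import Counter

# First four records of the raw coder output above (abbreviated sample).
raw = """[
  {"id": "ytc_UgydsObKgWJzks654EN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5MnejrGSbsruraBp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSKJ26vaEOaA8Jw0d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx4SXVsDWX27-pPXkR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Tally each coding dimension across all records in the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(r[dim] for r in records)
    print(dim, dict(counts))
```

On the full ten-record array this yields the per-dimension label distributions for the batch; a single comment's row (matched by `id`) gives the values shown in the table above.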