Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Whilst I think this video is generally good, it uses a lot of terminology for what is commonly called AI (generative chatbots/agents) that has a lot of semantic bias - "knows", "understands", even the concept that these models are "intelligent" is not true. You wouldn't say that Wikipedia is really smart because it understands a lot about history, for example. These LLM models aren't intelligent and likely never will be, no matter how far they are pushed. They might be a component of true synthetic intelligence in the future, and certainly can generate answer-like content, but they really don't have any intelligence at all; they are somewhere between posh Markov chains and a Chinese room experiment.
youtube AI Governance 2025-08-27T10:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx9o3HHeecqiJNnQZZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "ytc_UgzTFrdabq10x2SiEiR4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",    "emotion": "fear"},
  {"id": "ytc_UgzAB3h3nEQVzLx8TaV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "ban",     "emotion": "outrage"},
  {"id": "ytc_UgzprAF9yV335V4b9xB4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxqXbUrTuffWgqFaBB4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",    "emotion": "indifference"}
]
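The raw response is a JSON array coding several comments in one batch, so the coded dimensions shown above come from the record whose id matches this comment. A minimal sketch in Python of that lookup, assuming the response parses as valid JSON (two of the five records from the array above are reproduced here):

```python
import json

# Two of the five records from the raw LLM response above (same structure).
raw = '''[
  {"id": "ytc_Ugx9o3HHeecqiJNnQZZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxqXbUrTuffWgqFaBB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index the batch by comment id so a single comment's coding can be retrieved.
by_id = {r["id"]: r for r in records}

# The matching record carries the four dimensions shown in the Coding Result table.
coded = by_id["ytc_UgxqXbUrTuffWgqFaBB4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer deontological none indifference
```

In practice the model output may not be valid JSON at all, which is one reason the raw response is kept for inspection alongside the parsed result.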