Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
A conversation that is totally off base. Try again, but next time don’t anthropomorphize the “behaviors” (‘actions’ seems a more appropriate word) of LLM machines/algorithms or the human-written programs mysteriously called “agents” for some reason. LLMs don’t “try” to do anything, nor do they have any intentions whatsoever; they simply produce next text tokens based on a matrix of coefficients produced during their “training”. Agents don’t do anything other than execute the code that makes them up. LLMs are powerless without agents, just machines good at predicting next lines of text, and agents will stop if you install a line of code saying if the stop flag is high, quit, or whatever. The hype is through the roof on this technology, and when the bubble inevitably breaks, a lot of people are going to look stupid.
YouTube · AI Governance · 2026-03-02T01:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxvwo_02ZBF84zNxFl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "curiosity"},
  {"id": "ytc_UgwXBgKbah3EcsSIicZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyMwXehTshcpUMgcGx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwxGXV8sPil8PGttYJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgydS01XTsdjHOPcj0V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwqwdyNjRghOD34QHp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy-J-u0uTLP2WDZ7k54AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugy8yTqW3iLjS2ryQdF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "curiosity"},
  {"id": "ytc_UgyaWavpM1N3Q5FfogJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwsooizqu0pGx6aDfR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
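A raw response like the one above can be parsed and screened before its rows are trusted. The sketch below is a minimal, hypothetical validator: the function name and the allowed value sets are assumptions inferred only from the values visible in this log, not from the tool's actual codebase or full coding scheme.

```python
import json

# Assumed value sets per dimension, inferred from this log's data only.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "curiosity", "approval",
                "indifference", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array; keep only rows whose values
    fall inside the assumed coding scheme."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage with a single (shortened, hypothetical) row:
raw = ('[{"id": "ytc_x", "responsibility": "developer", '
       '"reasoning": "deontological", "policy": "none", '
       '"emotion": "outrage"}]')
print(parse_codings(raw))
```

Filtering rather than raising keeps one malformed row from discarding the whole batch; a stricter pipeline might instead log or re-prompt on invalid rows.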