Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"There's a lot of people I've talked to, and I think they may be only simulating thought and simulating intelligence." 🤣🤣 And no, I don't think an LLM "Understands" anything, it is a neat trick to predict tokens, and now with "Agentic AI" we take those predictions and automate performing action and giving feedback as context, which might feel close. If you extend this further, and say we provide tools like "Sight" and "Sounds" and "Tactile" and we feed those sources in as context, and then ask an AI (to predict) things, it will feel even more natural, but it is really just _predicting tokens_ based on us telling it which tokens are correct. We train the AI. And judgement implies decisions, and again we "bake in" judgement. So in the end I would argue AI systems really become an extension of their creator(s), which can include instruction to react based on runtime inference. You want an AI that makes judgement like "is Metallica good music? Yes." You can have it. You want AI that censures historical events? You can have it. One final noteworthy mention is that it _is capable of novel output_. Since it can take inputs and make predictions, we've seen in things like material sciences as well as music or visual generation, it can create combinations that haven't existed and are valuable. Which is awesome!
Source: YouTube, AI Responsibility, 2025-11-13T01:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx0RwwXySOsH_1xpBR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxnA3KPL8BAnr2wiBV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz_mYik8njTgtUuyRt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzQsa5oatAo2lyABht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwkQnoPL-kSlvRccfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzZOMDEuLZF6coSayJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx07qJMUF3G1E1rfAV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxzawyYQ32Lnq7QrWx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz6WHuDAISruKoq9XB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwEGkkK8RySHDzadKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"} ]