Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First thoughts about this AI title... AI doesn't "think". It is a series of algorithms culling data hundreds of times over. It has no connection to the information, no concept of good or bad, and nothing to tell it "this is a concept and this is a physical thing," because it has no perception. The basis for its output is combed from thousands and thousands of results, refining its "yes" results apart from its "no" results. Generally, algorithms dictate the current model of AI. "Learning" is about telling the algorithm what is "yes" vs. "no". AI has no connection to consciousness. For that level of sentience, AI would need a totally different type of hardware processing the data. Today, the same hardware as inside your computer is still doing the processing. No matter how complicated or trained the AI is, it still has no "understanding" of the data; even if it produces correct answers, it has no real opinion, and it can be tricked into stating absurdities as output.
Source: YouTube · Video: AI Moral Status · Posted: 2026-03-07T13:3…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzs9zSfS5uGXFVUA6d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzK71h7YHSgb0HUE3x4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzX-cL4oJjigJgSeIh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyEl2Od0pjUIemK4_t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx5FXsnRx8eHGqMrhJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwD3nzUI__8xdc-2WB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxr4f7PkSQxQXGsGH54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2LZm1LvQt356Lxrt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyFHzFrHqXnPiEZD0B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzg10KRr6shQAsW7t94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
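A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal Python example: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the sets of allowed values are an assumption inferred only from the codes visible on this page; the real codebook may define more categories.

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugzs9zSfS5uGXFVUA6d4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxr4f7PkSQxQXGsGH54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# Assumed value sets, inferred from the responses shown on this page.
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "resignation",
                "fear", "indifference"},
}

def validate(records):
    """Return (comment_id, dimension, bad_value) for every code that
    falls outside the allowed set."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

records = json.loads(raw)
print(validate(records))  # → [] (every record uses only known codes)
```

Running this on the full ten-record array would catch malformed LLM output (missing keys, invented labels) before it reaches the coding table.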