Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Mh honest question about this. If a human were to read, listen to, look at information created and generated by other humans, then learns said information, then proceeds to develop their own ideas and gain knowledge on a variety of topics this way… when said individual speaks his own mind about different topics and present their take and experience on that we would consider that individual to have become educated and learned to produce his own information which is of course borne from learning from others and expanding on top of that.

Technically an AI would be doing the same thing, it sees, reads, listens to information, so it learns and develop its own “brain” / pool of knowledge and then produce answers to questions we ask which are technically made of all its knowledge accumulated by learning as much as possible. So let’s say what is does is read and listen to information just like we do when we want to learn.

Books/videos/ content generally speaking is available everywhere, most of the time we’re not paying to read and watch information which makes up our cumulative knowledge. Does this technically mean that if humans were to “train” their mind by freely listen to YouTube content for example… then technically there’s a case for AI to do the same?

Idk just some weird thoughts that came to mind listening to this
youtube 2026-02-20T06:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzqzqZCuaxc2Xi3Rhd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw7VkeZfk6iMyLHMdx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYyo3j8GDhqvKlEVR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxexkQg6tZdTYpmKgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC0uBDLUrtMBMyFZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw_CFZRHmgDJIkCXKR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzix9MI7_81v0ODBxh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzEuAJQ1XsUCCnexfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdZWdMNE5pYggbp3x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxN98Cl05jycq5Zf1l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
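The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such an output could be parsed and sanity-checked is below; the allowed vocabularies are inferred only from the labels visible on this page, not from the full codebook, so treat them as assumptions:

```python
import json

# Per-dimension vocabularies inferred from the labels appearing in this
# page's records (hypothetical -- the actual codebook may define more).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only entries whose
    values fall inside the known vocabulary for every dimension."""
    entries = json.loads(raw)
    return [
        e for e in entries
        if all(e.get(dim) in vocab for dim, vocab in ALLOWED.items())
    ]

# Example with a hypothetical id: a well-formed entry passes validation.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_codes(raw))
```

Entries with an out-of-vocabulary value in any dimension are dropped rather than coerced, which keeps malformed model output from silently entering the coded dataset.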