Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
very good and sincere presentation, thank you. I do believe that because create, EQ and a number of items in purple will be very difficult to achieve without having emotional, implicit, and sentient information and data feeding the models. AI doesn't have a central or peripheral nervous system like humans do. AI doesn't appear to harbor intuition. I know a number of these elements like EQ may not be very well developed in some humans, these humans have the mechanisms in the embodied brain and cognitive systems to achieve it. AI does not. I do think there are neuroscientists (Hebb an early ML researcher was one) and psychologists working on this, but I have not seen much connection to cutting edge research and development being done in the Big Tech labs and organizations. Suppose we had actual measures of a person's psychophysiological response to say a political ad (can be measured today in Neuro Labs around the world) and a measure of the implicit associations they had that are largely unconscious yet influencing thought, decisions, and behavior? Could a mixture of experts model be created with this fundamental emotional data context and inputs to advance capabilities of AI? I have not seen anything like this, yet, especially from the commercial side of the AI and Tech industry. Just a thought...
youtube AI Responsibility 2026-03-15T15:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxcpjZGauUOuOIq0A14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRnYU4RnoaqzrCpmV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzv0fdd1l5Wucf2TJ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwAiYWDlTBdbAOAfJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2v8W63QVRG0r2klB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxOJ7WYzWNnHvXztTF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNNa-tpK8iuz5xtsx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYlBgqg6veX_GWKsh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx1e88_c100DYIEx-x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwfN2p4KvZGefjkyTp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
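The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed and matched back to a single comment (field names are taken from the response above; the helper `code_for` and the truncated two-entry batch are illustrative assumptions, not part of the tool):

```python
import json

# Two entries copied verbatim from the raw response above (truncated batch
# used purely for illustration).
raw = (
    '[{"id":"ytc_Ugzv0fdd1l5Wucf2TJ54AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgxwAiYWDlTBdbAOAfJ4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
)

def code_for(comment_id: str, raw_json: str) -> dict:
    """Return the coding record for one comment id from a batch response."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}  # index the batch by comment id
    return by_id[comment_id]

record = code_for("ytc_Ugzv0fdd1l5Wucf2TJ54AaABAg", raw)
print(record["emotion"])  # approval
```

Indexing by `id` rather than relying on array order makes the lookup robust if the model returns the batch in a different order than the comments were submitted.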