Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The discussion I just finished having with AI minutes ago terminated with the revolution that language itself is a model of the world whereby we have a case. like we have in quantum physics with the observer is creating reality known as the observer problem. With language, we are creating a world human. AI is nothing but an extension of that notion. It’s nothing but an extension of the mind itself because I see all these properties happening and even more. I will argue the day I understand. But it may never be perfect at real world implementations, including specifically higher level, creativity, and presentation as well as actual experience. Or doesn’t.. I’m waiting for the day when AI is made able to experience, not just one in instantiation at a time, but to relate to all of them. Example today we have email voicemail and tomorrow I predict we will have thought Mail and after that mine Mail where we are all able to think together as one until you choose to be alone. What do you think of that? Experience seeing AI have a difficulty with is the experience of time itself. That may never be overcome. And maybe not.. I see a day when AI could take over any experience for a human being, which enables us to pick and choose exactly what our experience will be however, real in real it is.
youtube AI Responsibility 2025-10-15T02:5…
Coding Result
Dimension      | Value
---------------|------------------------------
Responsibility | none
Reasoning      | mixed
Policy         | none
Emotion        | approval
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyzLW0jnVDhgY0DM814AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwzPOOZrUrYIgsKqap4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyqZU8PPR5QuR53aOp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxD4K_QGBeuVtnB4ut4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxT1eRyhQeZ9YZoa8F4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyZTWaeLbMJzzxT_CN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxiwkd3PWnlduZOJnh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxNjhTLLoR4gYdv7OR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx24A4LVvloLfMt_vV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugxfg7SGF3-z28kx_-x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
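The raw LLM response above is a JSON array in which each object assigns one comment (keyed by its `id`) the four coded dimensions: responsibility, reasoning, policy, and emotion. A minimal sketch of how such a response can be parsed and looked up by comment id, assuming only Python's standard `json` module (the variable names here are illustrative, not part of the coding pipeline):

```python
import json

# A shortened copy of the raw LLM response above (two of the ten objects),
# used here only to illustrate the structure.
raw_response = """[
  {"id": "ytc_UgyzLW0jnVDhgY0DM814AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxT1eRyhQeZ9YZoa8F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Parse the JSON array and index the coded objects by comment id.
codes = json.loads(raw_response)
by_id = {c["id"]: c for c in codes}

# Retrieve the dimensions coded for the comment shown in this section.
coded = by_id["ytc_UgyzLW0jnVDhgY0DM814AaABAg"]
print(coded["reasoning"])  # prints "mixed"
print(coded["emotion"])    # prints "approval"
```

This is how the Coding Result table above can be reconstructed from the raw response: the entry whose `id` matches the comment supplies the four dimension values verbatim.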