Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Saying this with the understanding that we really don't know how close we could be to creating a superintelligence without understanding how consciousness actually works, I nevertheless don't think we are even nearing that reality. LLMs do not do anything close to thinking; it's actually much closer to remembering with weighted results. They don't reason, and they don't even understand what they are outputting, only that output like this is what they have been "trained" to respond with. Having said that, there are numerous real-world problems happening now with AI and how we use it that are critically important: the effect that dependency on AI has on society, similar to the poorly understood effects of social media algorithms; unreliable output and hallucination that is confidently reproduced; copyrighted material and intellectual property used by companies to build AI models; the effects of training future models on the output of past models; AI psychosis, which is already a significant issue even at this early stage of wide LLM usage; companies attempting to replace employees with LLMs, much too early for the current state of the technology, but corporate structure doesn't reward long-term planning; and the ethical issues of codifying our current cultural values, bad and good, into models' training data. There are endless issues we need to deal with in this technology long before the fabled singularity event is a factor we have to worry about.
youtube AI Moral Status 2025-10-31T18:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyPNrdDRZiPWpfWqHB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwjdYfnsDQuw2Edxfx4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzKEgf6P7pZRCRYCEd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw8-3TVxfY7fty90_B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwAKvWCoXZdweSDSsx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
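A raw response in this shape can be parsed and sanity-checked before the codes are trusted. The sketch below is a minimal, assumed workflow: the dimension vocabularies are inferred from the values that happen to appear in this export, not from a documented codebook, so the real allowed-value sets may differ.

```python
import json

# Allowed values per coding dimension, inferred from the codes seen in this
# export; the actual codebook may include values not shown here.
VOCAB = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"resignation", "indifference", "mixed", "approval",
                "outrage", "fear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM JSON array into {comment_id: codes}, dropping any
    record that contains an out-of-vocabulary value."""
    coded = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in VOCAB}
        if all(codes[dim] in VOCAB[dim] for dim in VOCAB):
            coded[record["id"]] = codes
    return coded

# Hypothetical single-record response in the same shape as the dump above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
print(parse_raw_response(raw)["ytc_example"]["emotion"])  # resignation
```

Validating against a fixed vocabulary matters here because LLM coders occasionally emit values outside the requested set; silently accepting them would corrupt downstream tallies.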