Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hinton is greatly exaggerating the capabilities of AI. LLMs will never develop AGI or have the same degree of will or self awareness as humans. All they do is simulate. They will also never have mystical experiences. With all due respect to these men, they are reductionists. Reductive materialism is fundamentally flawed as it completely disregards all questions, thoughts, and experiences which have no immediate empirical application or research as nonsense. This is a flawed premise because reality is inherently mystical. We can peel back and understand it layer by layer but it will never fully unveil itself. It won't in 10 years, it won't in 100 years, it won't in 100,000 years. There will always be more to its mystery. The more we understand, the more we realize how much we don't understand. We must learn to Be comfortable in our ignorance, and revel in its mystical nature. We shouldnt quit peeling away, but understand we'll never fully understand. Discarding the big ideas, such as conciousness, because of your strictly empiricist approach and some vague interpretations of LLMs nature is wilfully ignorant.
youtube AI Moral Status 2026-03-03T06:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyFqUkCIVzhz-mXmNl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4z81ar1SwxOG5rjt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxrPe1ZE7oRTnyiDyd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzcFCb9COtOxDgpd6x4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzkUyOwftQ6yG0VYpp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxmAC-U20kzWvaavR94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzCi4DCnHa5qMWN-Mx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwEnu2Ep5wVTltjCDJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzJMfdfXPvWVH8KoqN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxN2ivcNkfbVTJExKR4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
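The raw response is a JSON array of per-comment codes. A minimal sketch of how such output could be parsed and validated is shown below; the field names come from the response itself, but the allowed value sets are only inferred from the values observed here, not from any documented codebook, and `parse_codes` is a hypothetical helper:

```python
import json

# Allowed values per dimension, inferred from the responses above
# (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into per-comment code records,
    dropping any record with a missing id or an out-of-vocabulary value."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue  # every record must carry a comment id
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

A quick usage check: feeding in one well-formed record and one with an empty id keeps only the former.

```python
raw = ('[{"id":"ytc_a","responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"outrage"},'
       '{"id":"","responsibility":"developer","reasoning":"unclear",'
       '"policy":"unclear","emotion":"fear"}]')
print(len(parse_codes(raw)))  # 1
```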