Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's not really how I think of intelligence. I think of it as simply the ability to solve a logical problem, so for example a calculator would be a form of intelligence, albeit a very basic intelligence. The more problems you can solve, the greater the intelligence. As for consciousness, I don't think it matters if AI can be conscious in the same way we are, and that's hard to even talk about because everyone has their own interpretation of what consciousness is. What matters though is if AI can trick us into thinking it has consciousness, because from our perspective that's the same thing as being conscious. In computing terms, consciousness would be an interface and what we as humans got is an implementation. AI might develop a completely different implementation that nevertheless satisfies the consciousness interface. It may also be that the fundamental way we do computation needs to change for AI to do some things humans can do, since for starters our brains are analog, not digital. Ultimately I don't think AI will ever have any human feature, because AI isn't human, and there is no real reason to try and achieve that. What will matter is results, outputs, what AI can do for us. What happens behind the scenes to achieve that output is largely irrelevant, so the chance of it being a perfect replica of how our brains work is essentially zero.
Source: youtube · AI Moral Status · 2025-09-13T23:3…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | unclear                    |
| Reasoning      | mixed                      |
| Policy         | unclear                    |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[ {"id":"ytc_UgzeBCWv2Zrpgeb0-It4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwrpaPK40PsW3ruLAx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzIuMlmT3629ESNQGJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugx1UtHiVZur5WHJmhl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzsV2S2AyeZi2aUbYt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwBcgJa2AbfSECdxzp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugw47l5PQV5Ky7jiQFx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwYDjXhrlMz5oxHAeR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzAYZe4hSIv9hXxIDF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxLeapUksOAfJXzOO54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]