Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why think they will gain sentience? He says people cannot define it, but why is that an issue? People have a common self-grasp of consciousness even if they cannot express it in a linguistic sense, and with that in mind there's a difference between processes that mimic understanding and true understanding. Why think there's anything other than metaphoric mimicry at hand? The mall sensor "knows" that there's someone due to the weight, but it's clear that it's not true knowledge. Even if it were more complex in the relations it can make beyond mere weight, and it could sense shape, for example, and "know" whether it's a dog or a box or a human, that is not consciousness. It's just the interweb of processes of relations without an internal core of consciousness. Does that mean there's no danger in the operations of said processes? No, but I find the entire notion of fear of consciousness pure magical thinking. It's being afraid that objects become "real boys". It's Pinocchio. This seems to distract from the real political dangers of control, and it makes me suspicious that he wants to put control of information in the hands of central government. This is undemocratic and entirely authoritarian. If governments or groups/agents in control of that flow of information hold the keys to determining what becomes legitimate knowledge and what doesn't, then government has direct control of our semantic spheres, and that is very troublesome. This would seem to be a far more pressing danger than "conscious AI", or, at AI's current stage, a "superintelligence". That, however, comes from a non-expert. It may be true that what we see in ChatGPT as accessible to the public is unrelated to the true actual power of AI, but if public access is a real representation, then there's nothing to fear at all in that sense.
youtube AI Governance 2023-05-30T21:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgztGHRKtx8ISoGqL5J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyUjVypf_KJ40ii7UR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4BYv4IIyKPIHwX814AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwFr3bxD9d71FWNR1B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyWe9_ELZT6Ox_Fkd54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyGZs8ujAZWgMQl3rx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyo-VS7awQvg-EoNdV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz3U2dLJy6G3MKIyLp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxi21zQei3af1-PM-V4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzuLsO_vLhFdYvXzBF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
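As a minimal sketch of how a raw batch response like the one above can be parsed and checked: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the dump itself, but the allowed value sets in `SCHEMA` below are assumptions inferred only from the labels visible on this page, not the tool's actual codebook.

```python
import json
from collections import Counter

# Excerpt of a raw batch response in the same shape as the dump above
# (two records shown for brevity; real batches carry one record per comment).
raw = '''[
 {"id":"ytc_UgztGHRKtx8ISoGqL5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwFr3bxD9d71FWNR1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]'''

# Assumed coding scheme: value sets inferred from the labels seen in this dump.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"fear", "mixed", "indifference", "outrage"},
}

def validate(records):
    """Keep only records whose coded dimensions all fall in the schema."""
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

records = validate(json.loads(raw))
emotion_counts = Counter(r["emotion"] for r in records)
```

A validation pass like this is useful because LLM coders occasionally emit labels outside the codebook; filtering against a fixed schema makes such records visible before they enter downstream tallies.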