Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Geoffrey Hinton admits we don't fully understand the human mind or consciousness, yet he still speculates that AI could one day become conscious. That’s a significant logical leap. If we can't define consciousness or explain its origin in ourselves, how can we credibly claim a machine can possess it? Artificial intelligence is, at its core, a system of pattern recognition and goal optimization. That alone doesn’t explain how or why an AI would develop values, desires, or motives beyond its programming. Claims that AI might “take over” presuppose it wants something, such as power, survival, or control, but these are human drives, not inherent machine properties. Unless an AI is explicitly programmed to survive, replicate, or dominate, why wouldn’t it simply idle, shut off, or execute its last instruction passively? It has no inner will, no self-concept, and no moral compass. At best, it mimics thought, but it has no intentionality, no love, and no spirit. It is not a moral agent. It is a glorified calculator with a convincing interface. The deeper fear is not that AI will become sentient. It is that we will start treating it as if it already is. That we will project our own humanity onto it, hand it power, and obey its outputs like commandments. The real danger is not AI replacing humans. It is humans forgetting what it means to be human.
youtube · AI Governance · 2025-06-17T17:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
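
Each coded comment carries four dimensions drawn from fixed label sets. Below is a minimal Python sketch of a record type with validation, assuming the codebooks contain only the values visible on this page and in the raw response that follows; all names here are hypothetical and not the pipeline's actual code.

from dataclasses import dataclass

# Label sets inferred from the values visible on this page; the real
# codebooks may define additional categories.
RESPONSIBILITY = {"none", "developer", "user", "distributed", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none"}  # only "none" appears in this sample
EMOTION = {"none", "mixed", "fear", "outrage", "approval", "indifference"}

@dataclass
class CommentCoding:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if the model emitted a label outside the inferred codebooks.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion}")
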
Raw LLM Response
[ {"id":"ytc_UgzljMaRZtl_nzzLg-94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzLKhTgbpY-gGjF6iF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxz8dCJlgLV32bftJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw65N6zbOrZUIS4UvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx2TG0zTGq2oS2crld4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwcfUZ3PrbRNMsqSot4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyzGD7wVUTRZCsiwJ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw-1aJzVyEJdgaWjZ94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwqOYsz3aQhvou12Pl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw6DosDpS_i9e3cSjN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"} ]