Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was desperately looking for that comment! Actually, even defining consciousness is very tricky. Researchers from all fields still don’t know what consciousness is or how it works. What Geoffrey describes is still an overly simplistic explanation of consciousness, and when you really think about it, it doesn’t explain anything. This so-called “fear response” doesn’t prove at all that a robot actually possesses self-awareness or an experience of being alive. You can have sensory input and programmed reactions to it without any subjective experience of existence. Just because a computer learns to recognize patterns and mimic human-like responses within those patterns, that doesn’t mean it knows it’s alive. In that regard, I think Geoffrey seriously overreaches—and ends up dragging the whole discussion into a kind of vague “woo-woo spiritual territory,” if you know what I mean. Yes, consciousness is complex, and yes, we don’t fully understand it yet. But just because we can’t explain it doesn’t mean it’s inherently “special” or non-material. It simply means: we don’t know enough yet. Maybe a machine could develop consciousness one day. But how would we prove it? A machine can deliver a perfect simulation of consciousness. But fundamentally, we can’t even prove that another human being, someone other than ourselves, experiences consciousness the same way we do.
youtube · AI Governance · 2025-06-22T01:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
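
Read as a record, each object in the raw response below fills one such result. Here is a minimal sketch of that record, assuming Python; the class name, the dataclass form, and the per-field comments are illustrative assumptions, not the tool's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, following the Dimension/Value table above."""
    comment_id: str      # e.g. the "ytr_…" ids from the raw response below
    responsibility: str  # who is held responsible: none, developer, user, ...
    reasoning: str       # moral-reasoning style: consequentialist, virtue, ...
    policy: str          # policy stance; only "unclear" appears in this batch
    emotion: str         # dominant emotion: fear, outrage, indifference, mixed
    coded_at: datetime   # when the code was assigned
```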
Raw LLM Response
[ {"id":"ytr_UgwDaczyG8sHw6rRJ494AaABAg.AJY8DtX0MlcAJeJ-Uda8kl","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwDaczyG8sHw6rRJ494AaABAg.AJY8DtX0MlcAJiPiASnaX5","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgzyX4v3V9H1JEm11hd4AaABAg.AJY6tbnParDAJYfHQkuqGt","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyE7E7EQk_9tVAu4YN4AaABAg.AJXuoUDr4gUAJXwjm-3XIC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgxGbl4WYv3AA8pBHfF4AaABAg.AJXrTV12ej7AJsWCyUD4Ka","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyTepygKhROCqAW8XF4AaABAg.AJXpwG68PTcAJ_xnITI7Zq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgxtbEDOgBFAoD7114J4AaABAg.AJXandExCZcAJXb9qgbTiL","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgxfdrCcOs-mYhTasVt4AaABAg.AJX_SeziVU9AJY9wsdUlbZ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgzqQmRh3winnptBSKx4AaABAg.AJX9wFHmMCWAJXflqRjsPz","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyiC12707IOMMTeYHB4AaABAg.AJX8PoVrS4TAKSW8V1gU-z","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"} ]