Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples:

- "The more we see of AI, the more it looks like a global garbage race.…" (rdc_kyzj07g)
- "And here we go again. Big smooth brain money hungry CEOs thinking that AI advanc…" (ytc_Ugy8Ye4zA…)
- "I would have a hard time calling a robot like 'Harry' a person unless I knew the…" (ytc_UggnfU6yP…)
- "Can people stop thinking that ai will cause the death of an artist? People have …" (ytc_Ugw4wymhG…)
- "unseat billionaires and tech consolidation and pull the plug on ai datacenters ;…" (ytc_UgwIeVmWe…)
- "@crazydissy3893well people who steal work from others and then generate stuff k…" (ytr_UgwrIeMCb…)
- "If i have a problem, im going to jesus. An AI could never do what jesus did for …" (ytc_UgxZ9OPku…)
- "so if the double slit experiment is effected by conscious observer, then let AI …" (ytc_UgwVt4hc6…)
Comment
Great discussion! But is AI mimicry equivalent to consciousness? Mr. Hinton argues consciousness can be learned, using the example of a big robot vs a small robot, where you can train AI to detect danger. But, humans experience something beyond this, pain. Pain not only signals danger but also involves a subjective, emotional experience that shapes how we evaluate risks, like deciding what pain we’re willing to endure (e.g., pushing through a marathon). While AI can mimic decision-making by prioritizing certain inputs, it lacks the inner, qualitative experience of suffering or the existential awareness to weigh it, leaving a gap between mimicry and true consciousness, which we still don’t fully understand.
Source: youtube · AI Governance · 2025-06-25T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0nikON3sE88TIn9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwd4tjAj_M71QHdonJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-jc2vNvHl_8oEz9p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugybxd0qI1dLUrNnJVJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy89jtlKlVf7tCXt3J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZEvj9yz38Mc925I54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyRWoplruDqln8msRd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyV03yJLJ27yniXDK94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx4TxXHxoKZAhzYRk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgZAisXCROf4RI8ml4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
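The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four coding dimensions (responsibility, reasoning, policy, emotion) as fields. A minimal sketch of how such a response could be parsed and looked up by comment ID follows; the two sample rows are taken from the array above, and the variable names are illustrative, not part of any actual pipeline.

```python
import json

# Raw model output: a JSON array of per-comment codes (two rows shown,
# copied from the raw response above).
raw = '''
[
  {"id": "ytc_Ugy0nikON3sE88TIn9B4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyRWoplruDqln8msRd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the rows by comment ID so a single comment's codes can be
# retrieved directly, as the lookup widget does.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgyRWoplruDqln8msRd4AaABAg"]
print(row["policy"])   # regulate
print(row["emotion"])  # fear
```

Keying the parsed rows by `id` mirrors the lookup-by-comment-ID flow described at the top of the page: one parse of the raw response, then constant-time access per comment.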