Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "You need a better hobby, to get those LLM to reply they need to be repeatedly ov…" (ytc_UgyMhXezn…)
- "The problem with rich people, politicians, and many people in power is that they…" (ytc_UgwW7_REP…)
- "I think now is the best time for people to learn critical thinking and logic. It…" (ytc_Ugz4Fq1-X…)
- "Consciousness is not just data, its also a chemical reaction in the brain causin…" (ytc_UgznmNll0…)
- "AI is a human mirror in steroids, and now with Agents, I can only imagine with e…" (ytc_Ugw4jHpki…)
- "@willinton06 I think he means LLMs. LLMs by nature will have diminishing returns…" (ytr_UgzamEgkC…)
- "I'm amazed at how these conspicuous Christians always give credit to "God" when …" (ytc_Ugz-8RkmZ…)
- "Ai and Ai detectors "see" in binary. They don't even know what a letter or word …" (ytc_UgxBS00hh…)
Comment
Mr. Kahneman: My take on consciousness is different from that of most of my colleagues.
Many people think that the question of what consciousness is, is the cardinal question. Philosophers think that. Computer scientists think that, and they ask the question of whether artificial intelligence is going to be conscious or not.
And for the life of me, I can’t get excited about this question because when people are raising the issue of whether a robot will be conscious or not, I ask them, “How on earth will you know?” How will you know whether the robot is actually conscious or is just pretending to be conscious? And if there is no way of knowing, I don’t find it very exciting. I must be wrong because so many brilliant people are fascinated by this question. But for some reason, I’ve never understood their fascination for it.
Source: youtube · AI Moral Status · 2019-02-03T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz5d1Q6Hspo0LkZHcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2Ria52U8rYm4o-Ll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxh0nN_yMz5rJNJX6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyofBn08Bm4LyCCnOB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwa2yI9dUj8pVUFUbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBFHv6g4gWKYZc5cV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyJ6S4J8y7auS0JgyB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgybG6Eri3iLrYs_tgx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxvU-20s4sLbbGjtNd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxH35ZKOkIzcvq6hZp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
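The raw response is a JSON array of coded records, one per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a record might be validated before ingestion — note that the allowed value sets below are inferred only from the values visible in this sample, not from the full codebook, and `validate` is a hypothetical helper:

```python
# Allowed values per dimension, inferred from this sample alone (assumption:
# the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "virtue"},
    "policy": {"none", "industry_self", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means well-formed."""
    problems = []
    # IDs in the sample use a ytc_ (comment) or ytr_ (reply) prefix.
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append("id missing or lacks ytc_/ytr_ prefix")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}: unexpected value {record.get(dim)!r}")
    return problems
```

A record straight from the response above should come back clean, while a record with a stray label (e.g. a misspelled emotion) would be flagged before it reaches the coding table.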