Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any comment to inspect it)
- "Im so glad AI cant fix a machine when it breaks down. Thats my job! 😁…" (ytc_UgyuzYJPL…)
- "Nothing about AI has been able to answer technical questions with any amount of …" (ytc_Ugx2oC4S0…)
- "Me when someone saw my poly ai chats:IF I CAN PROVE THAT I NEVER BROKE THE LAW😭…" (ytc_Ugzpe96Xy…)
- "What I have noticed is that AI doesn’t take the right decision but rather its an…" (rdc_ne1t4dy)
- "I think the comment made didn't quite hit the main problem. If you didn't start…" (ytc_UgysffE09…)
- "I believe that it depends what you do with the art. AI generating takes time, a…" (ytc_Ugyq6_yck…)
- "Fake detective agency also using AI and they will cross all line without any leg…" (ytc_UgwYRmJ7m…)
- "You don’t want to read this… just skip or down vote. The truth is Gen Z is ve…" (rdc_o464enx)
Comment
AI is not self-aware. This has already been proven by Roger Penrose in the book "The Emperor's New Mind." He proves that Turing machines by themselves are not capable of experiencing self-awareness. ALL AI that exists in the world currently uses Turing machines only. It might be possible at some point in the future to create self-aware AI, but this would require hardware which mimics processes in the brain which are still not understood. In other words, a Turing machine is sufficient to perform any arbitrary computation, but this alone is not sufficient to produce emotion or awareness. There still exists functionality of the human brain which lies outside the ability of a Turing machine to replicate. The mechanisms that produce this functionality would need to be understood and reproduced before AI would have the capacity for self-awareness.
youtube · AI Moral Status · 2025-07-01T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx78hE5hMStX9NTL2h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyZ7KxMgUx6KbsqU_F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwFRMtmFyg0EQk3_fN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwtrmuduj1HvmXq-2l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxpY2dMXfLDX3VW6QZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyzBN1dqbQoDNoAyNZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw-LuSZEIxXP-QXU3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSN6GWGTFTaq8og754AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyYbIjB9hbjkJ4GIgx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwUTJhBzOZ3uCTZgw14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
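The raw response above is a JSON array with one object per coded comment, keyed by `id`. A minimal Python sketch of how such a response can be parsed and indexed for the "Look up by comment ID" view (the variable names are illustrative, and only two rows from the response above are included for brevity):

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgyZ7KxMgUx6KbsqU_F4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwtrmuduj1HvmXq-2l4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
"""

# Index the array by comment ID so a single coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgyZ7KxMgUx6KbsqU_F4AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # indifference
```

Because each object carries its own `id`, the lookup table can be rebuilt from the raw response alone, without consulting the per-comment display.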