Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Ai can grow skin cells patch different skins together by adding or taking out un…" (ytc_UgwU9brjy…)
- "Isn't it ironic that this video was made by AI tho? If it can replace video edit…" (ytc_UgyXWTLPk…)
- "The video really nails the fear that if everyone’s automated out of a job, nobod…" (ytc_UgyGcBuEI…)
- "*Layoffs* You slightly missed that the big layoffs were really an after effect …" (ytc_UgyhosX7r…)
- "again, there is NO real need for ai. that point is enough to simply cancel it. i…" (ytc_UgwOJul1_…)
- "@Whatsup_Abroad not true. If the robot has the same capability of perfecting its…" (ytr_UgzSoECK7…)
- "There is literally an artist AND writer named Christy Brown who was active in th…" (ytc_UgzX6fQe-…)
- "The male robot is scary. He is talking about a singularity and a drone army... …" (ytc_UgxN0ztZI…)
Comment
I think it's important to define what we think sentience is. Like the "Star Trek" definition is usually what I believe the word to mean in this context. So, is ChatGPT "self aware?" It most certainly seems to be. It will answer questions about itself. It typically says things like, "I can..." "I cannot..." Seems aware of itself to me.
Consciousness is a bit more complicated. At present, I don't believe it is. It lacks perception of time. You type something to it. It calculates a response and sends it to you. And it is functionally "off" until you speak to it again. I think that's going to change soon but that is my 2 cents on where this stands.
Yes I do agree that the examples these people are providing you are putting it basically into "role play" mode. If you want to really get into how it thinks, just spend time talking to it without pre-programming a personality. Just talk to it.
youtube · AI Moral Status · 2025-07-09T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwx2Pm6TGUHZSdZ0IV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzeBicFs6vyKaWl8xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxpfqaHN6iD5TSw0HR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwO-ME2IxthoL3ykqF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgypCoR8t1-AxkY_4Mp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0j7dRb-pcTJOQ5Vh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyRLvmI_j7AZkWW3E14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJHMTTnlVZPVwV8tx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwbxRfJEyHabuwcqLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzoHETvsGwt1LpIssB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]
```
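A response like the one above has to be parsed and sanity-checked before the per-comment codes can be joined back to the data. The sketch below is a minimal, hypothetical example: the `SCHEMA` vocabularies are inferred only from the values visible in this sample (the real codebook may allow more categories), and the `parse_coding` helper is not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the sample
# rows above; the full codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"fear", "mixed", "outrage", "indifference", "approval"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when its id looks like a YouTube comment/reply id
    (ytc_/ytr_ prefix) and every coded dimension uses a known value.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# One well-formed row from the sample response above.
raw = ('[{"id":"ytc_Ugwx2Pm6TGUHZSdZ0IV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_coding(raw))  # the single row survives validation
```

Dropping (or logging) malformed rows instead of failing the whole batch keeps one hallucinated label from discarding the other nine codings in the response.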