Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Clever AI Humanizer is wild 😮 Makes AI sound totally natural — highly recommend …
ytc_UgyUKDQK9…
i saw a giveaway for ai art on twitter and apparently some people are using them…
ytc_UgxaVKco7…
Saagar should stick to things he understands. Not sure what those are TBH, but A…
ytc_Ugy_UI8Fw…
What would you think if Putin or Kim Um were the ones setting the ethnic policy …
ytc_UgyIf85Qu…
The numberless rabbi disturbingly mug because jumbo recurrently prevent anenst a…
ytc_UgxMWsGoG…
7:09 mankind has no ethical nor moral right to create any system, including AI, …
ytc_Ugyh-ucSL…
It looks like you’re having fun with the spelling of Sophia's name! The play on …
ytr_UgztCOr4U…
The billionaires put ALOT of money into ai, they cant afford for it to fail…
ytc_Ugw9oLnaB…
Comment
It is so obvious that it is not conscious; it is conditioned like we humans are, and we humans passed that conditioning on to GPT. To be conditioned you just need knowledge, but we humans have emotions accompanying that knowledge, while GPT has no emotional intelligence; that's the difference. Another difference is that we humans use knowledge based on our level of consciousness, which means we can use it for good or bad, while GPT has no consciousness; it will just do what it is programmed to do by humans, which can make it seem conscious, but it is only simulating. You need to know the difference between illusion and that which is real. The human ego/mind with all its thoughts and emotions is not real, and neither is the AI. You must first find out who you truly are to be able to distinguish between illusion and that which is real.
"Mmm" = means thinking about something, which GPT uses to seem as if it is thinking; for those in the comments who are wondering, it is all simulation to create the experience of a real conversation.
youtube
AI Moral Status
2025-02-25T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugz7LsfeIt8n7srb-9F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzuewQJ_MDwWHwAhdp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugwbi7hjaMmTR44f5R14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzCq9HyVpUnRZXO_aJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgwaaJH_PxiwL4xfDed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzuycL3jRrCB7ot_zt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzTr-4tYsi71WjdAOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugyp9y578Q-8d41xOmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgxbXQl8VnsDfqE8SPh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx1Pf4eU1rfPRBwNNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
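The raw response above is a JSON array of per-comment codings, which the dashboard then renders as the dimension table and supports "look up by comment ID". A minimal sketch of that mapping (not the tool's actual code; only two of the IDs from the raw response are included for brevity) might look like:

```python
import json

# Hypothetical sketch: parse a raw LLM coding response (a JSON array of
# records), index it by comment ID, and look up one comment's dimensions.
# The IDs and values below are copied from the raw response shown above.
raw = (
    '[{"id":"ytc_Ugz7LsfeIt8n7srb-9F4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgzuycL3jRrCB7ot_zt4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
)

records = json.loads(raw)

# "Look up by comment ID": build an index from comment ID to its coding.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgzuycL3jRrCB7ot_zt4AaABAg"]
print(coding["responsibility"])  # company
print(coding["emotion"])         # outrage
```

Because the LLM occasionally emits malformed JSON (e.g. a stray closing character), a real pipeline would typically wrap `json.loads` in error handling and fall back to marking the batch's dimensions as "unclear", as the table above does.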