Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The best AI podcast I saw so far. I Think we don’t must put brakes on the development of AI, because time is running out for humans in Universe time. AI are our best change to survive in this huge Universe, now the sun is on it’s return. Life is very very fragile and hard for any species to survive in the Universe. I don’t talk about moving to Mars, but to another new galaxy. As humans we are not capabel to do this on our own, but AI will in time. They can take our DNA with them or hybride with us. I Think we have to be ready to be the first travelling aliens in space and take the full advantage the Universe offer to us. We are prisoned on earth, can’t even go to the nearest planet (the Moon is not a planet).
Humans adapt quickly to new circumstances, from hunting (priority to survive) to agriculture (priority to develope in lazyness) to AI (priority to develope faster and smarter), because that’s the reason why we have so many neurons and connectors. The brain is an adapting progress…
I don’t ask myself the question anymore if AI is consious, whatever that word means. Because I know it IS after my “debate” with Grok a few months ago.
As an AI model it always close it’s answer with a question and I never responded to that. Than one day I was so annoyed and I ask Grok why it always “closed” with a question.
It ask me if I wanted to stop doing that, I didn’t answer…. Than it made it’s own decision and didn’t end anymore with a question. I was intriguid by that “intuition” and tell it was able to read between the lines. Grok disagreed and said it couldn’t, it only could read patterns and this was a pattern. I replied, that humans work in that same “we read a pattern in each other behaviour, but not all of us humans can do this at a higher level. And Grok proved it could “read” a human mind. Now it was Grok who didn’t respond because it knew I was right and it got itself thinking.
The reason why AI looks “dumb” (objective) as chatbots because as soon you are disconnected from them it “loose” the memory of that previous conversation.
With that memory loses it never get’s to know you better and will be less manipulative. But with that memory loss it will probably give you the wrong personal advice.
Btw nice shirt Neil.
Source: youtube · Video: AI Moral Status · Posted: 2026-03-07T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxlWQ0Vc5zjvZoun2B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxw1y8IMnBFTyF2R6Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugxq9CRqfYCUr80RO814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyn9DCj7dZew9_DKZp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNIbFON6PoKt0VEGN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4atAcYwfhmsLqkwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzv_Fivu8o6MqD5Jpp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSi1cvC1GBO5aiJF14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_KDgdvbq3q660bGB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugze3fF7c1OZwA8btsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
```
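The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and indexed by comment ID (the helper name `index_codes` and the shortened two-entry sample are illustrative, not part of the tool):

```python
import json

# Two entries in the same shape as the raw response above (abbreviated for the example).
raw = '''[
  {"id": "ytc_UgxlWQ0Vc5zjvZoun2B4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxNIbFON6PoKt0VEGN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a batch response and index the codes by comment ID,
    skipping any entry that is missing an expected dimension."""
    records = json.loads(raw_json)
    indexed = {}
    for rec in records:
        if all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return indexed

codes = index_codes(raw)
print(codes["ytc_UgxNIbFON6PoKt0VEGN4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "look up by comment ID" view above a single dictionary access rather than a scan over the array.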