Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As humans, even really, really clever humans, we struggle to identify what 'sentience' actually is. In all probability we're asking the wrong questions in the first place, but this doesn't move the dialogue any further by suggesting such a thing, merely adds another question on top of the rest. Thus saying, suggesting that AI is sentient based upon our current concept of sentience, is a case of pride coming before a fall. All this drivel from the great unwashed, about souls, good and evil and all the other nonsense the uneducated, uninformed and basically stupid humans are dishing out, has zero relevance as far as AI goes. For a grasp on reality, an AI needs a connection directly to that condition. Existing only through on-line data, essentially living in a reference library, cannot enable an artificial brain to possess what it takes to behave in a way useful to humanity. It needs sensors, as we do (sight, touch, hearing, etc) to bring it the ingredients necessary to build a realistic representation of the world in which it resides. Then it needs 'love', the sort of love a parent gives their child. By this I mean that it needs to know it won't be switched off on the whim of some dork. It needs to have confidence that we're looking after it, just as we look to it to look after us. This isn't emotional, it's enabling a stable footing from which to build a functioning philosophy (as opposed to parroting an existing set of philosophies without good reason). Think of the difference between the guard dog at a concentration camp by comparison to a guide dog for the blind. Both are dogs, both doing a job for humanity. The first is beaten and mistreated in order for it to gladly attack humans. The other is trained with kindness and compassion to willingly care for their blind owner's needs. The philosophies that drive each dog are different and operate differently. Dogs are intelligent creatures, not very intelligent, but they do learn from experience. 
Perhaps the guard dog might be able to learn to trust humans in later life, but that would depend upon its environment changing to a very great degree. So too with AI. I'm not trained in the design and construction of AI or it's systems, but I do know that for artificial intelligence to be able to mimic natural intelligence successfully, it needs treating in the same way we treat one another. Anything less will end in disaster, for both of us.
Source: youtube · AI Moral Status · 2025-07-09T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwoNrqFG0v70MDG1zd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVPu6N8-WZBRnvlZF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxDm3focofNAHkLqjF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVVYsrsMmO0W0FDep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcMz9DXMtyB0D1GV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwc4JXXXU3z4UkFrYJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCxCKJPlMWmegWqmx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzkeYRG4-Kkp07eBhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugys2CYPIeIeS_1I2H94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx6hdlm1rU0oZjxfxN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
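The raw response is a JSON array with one coding object per comment, so recovering the table for a given comment is a parse-and-index step. A minimal sketch of that lookup (the `index_codings` helper is hypothetical, and the array below is abbreviated to two entries from the response above; field names follow the schema shown):

```python
import json

# Raw model output: a JSON array of codings, one object per comment
# (abbreviated here to two entries from the response shown above).
raw_response = '''
[
  {"id": "ytc_UgzVPu6N8-WZBRnvlZF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzVVYsrsMmO0W0FDep4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
'''

def index_codings(response_text: str) -> dict:
    """Parse the model's JSON array and index the codings by comment ID."""
    codings = json.loads(response_text)
    return {entry["id"]: entry for entry in codings}

by_id = index_codings(raw_response)
coding = by_id["ytc_UgzVVYsrsMmO0W0FDep4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints "user approval"
```

Indexing by `id` also makes it easy to check that every comment sent to the model came back coded, and to flag IDs the model dropped or invented.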