Raw LLM Responses
Inspect the exact model output for any coded comment; look up a coding by its comment ID.
Random samples (truncated previews with comment IDs):

- "Having worked in technology going way before the IBM PC in things such as 10 Bas…" (ytc_Ugwbuxpvu…)
- "I think AI is fine. It can always be unplugged/destroyed if necessary. It’s not …" (ytc_Ugx75THv6…)
- "As a Presidential candidate myself, who owns a Tesla model S, it’s hard to make …" (ytc_UgxORfCJe…)
- "isnt ai supposed t o be used like that, i mean let ai generate some thing and w…" (ytc_UgyjrjJQo…)
- "39:58 Chemical Weapons have Zero Upside, unless you’re a Violent Psychopath who …" (ytc_UgxOmEuoj…)
- "you mean the best generation? because now you can use ai to create the panels of…" (ytr_UgwfVTKlH…)
- "I don't know why people aŕe afraid of them, you would be at more danger with you…" (ytc_UgypkeVhG…)
- "Solution to save humanity from AI: 1 law — only citizens can own AI/robots. Corp…" (ytc_UgwXd37p9…)
Comment
If you get the best AI in the world and teach it nothing but dogs? And I mean everything about dogs. Videos, different types, sounds, movements and so on, do you end up with something which will sit down with you and have an intellectualised converstion about the meaning of words? Or are you going to end up with an AI which is exceptional at mimicing dogs? Giving AI words which have inherent human emotions and layers of meaning built into them will lead to an AI appearing to have emotions and layers. Or, you put a spring opposite your front door and rig it to spring up when the door is opened. You write the words: "I love you" on a piece of paper which is attached to the spring to pop up. When your loved one comes home, opens the front door and sets off the spring, the words "I love you" spring up - is the spring intelligent? Does it have feelings? Could you just as easily written the words: "Woof! Woof!"? And so finally it follows: If all we feed GPT5 is the writings of children up to the age of 7, will it come out and understand the works of Hardy? Dickens? Shakespeare? F. Scott Fitzgerald? Hemingway? Steinbeck? (So these novels are not added to it's data set, but rather it interacts with them the same way you are interacting with GPT4o here...) Or will it answer questions about these works in a very 7 year old way...? I know this is only a thought experiment - but I think the result, the answers, would be really, really interesting. Or it will just complain that it doesn't understand half the words used in the novels. Or would it be able to figure the meanings of words it's never encountered? But the real interesting thing (for me) is when it's put in a body (lets give it 10,000 bodies to control), interacts with reality and makes the results part of it's data set. And I mean from robot helpers all the way to analysing data from hubble and james webb. 
Where it can create it's own experiments, carry them out and add the results to it's own data set, which then leads to more experiments... And so on. That's when things will get really interesting. That's what I really want to see. Self awareness is ok, a thing, I guess (I mean, what we want is a slave race of robots, so I vote any self awareness is tracked down, turned off and we avoid that going forward) - but this is where the real money's at. Hypothesis, experimentation, results added to data set which leads to the next hypothesis.... When it starts to do this. For me? This is the real singularity of AI.
Platform: youtube
Video: AI Moral Status
Posted: 2024-07-26T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgzS1qMP90XW9hY4yU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAMvDIFkabXeRFfSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSX_G1ls5FmIY_1Y94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugytm74TlVWH9--34Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwy6D_M9nPkz2AJUqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5A-57QIgx3pgx6114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZtmAI3xOVeDCDZrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzpixWTtCNr_jzH4GZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZw4k-0V13mXPslat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7UywXWDKfmk74Is94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
```
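The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw model output as JSON, index the entries by `id`, and return the coding for the requested comment. This is a minimal sketch, not the tool's actual implementation; `lookup_coding` and the abbreviated `raw_response` excerpt are illustrative names, and it assumes the raw response is a well-formed JSON array like the one shown above.

```python
import json

# Excerpt of a raw LLM response in the format shown above (hypothetical subset).
raw_response = """[
  {"id": "ytc_UgzS1qMP90XW9hY4yU14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwAMvDIFkabXeRFfSN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID,
    or None if that ID is not present in the batch."""
    codings = json.loads(raw)
    # Index by ID so repeated lookups across a batch are O(1).
    by_id = {entry["id"]: entry for entry in codings}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgzS1qMP90XW9hY4yU14AaABAg")
print(coding["emotion"])  # -> approval
```

In practice the raw response would be validated as well (e.g. checking that each dimension takes one of the allowed values such as `none`, `mixed`, or `consequentialist`) before the coding is stored.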