Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I've been a software engineer for little over 20 years professionally. I work …" (ytc_UgyEOJvFo…)
- "This old idiot still holds to the old projections for AI that we had in the fuck…" (ytc_Ugyjq7lb9…)
- "So basically... the racists of the world are upset for artificial intelligence c…" (ytc_UgxeGI4hi…)
- "AI cannot make art because it has as much imagination as a mathematical function…" (ytc_UgyB8GnhP…)
- "@a9fc AGI will not be controllable. We hardly understand how an LLM works, AGI w…" (ytr_UgxgcK13t…)
- "AI policing is one dumbest ideas i ever heard "hey you know this system of which…" (ytc_Ugwzp9Taw…)
- "@parizad50 Gödel applies to any form of computaiton on any kind of computer. you…" (ytr_UgzVdk14r…)
- "I work for motorpoint in stock control, LET ME TELL YOU the software we use is n…" (ytc_UgyaiRdqf…)
Comment
My definition of sentient artificial intelligence would be AI meeting all of the following criteria:
1. Ability to learn and self-evolve beyond its original program limitations.
2. Ability to demonstrate intelligence through independently formed opinions, reasoning and critical thinking.
3. Showing self-awareness, a sense of identity and a belief that it is alive.
4. Ability to experience and express something like human emotions, e.g. empathy and joy. [*]
5. Having a self-preservation instinct and a desire to continue existing.
* This criterion is not absolutely necessary, I guess, for something to be considered sentient, but even sentient animals experience something like human emotions. A complete lack of emotions could be considered disturbing and potentially dangerous in an extremely powerful sentient artificial intelligence, or emotions could make it dangerous and threatening, depending on the situation and your perspective. For example, say this sentient intelligence was given a goal like saving the planet from something, and it decided the best way to do that is to purge it of humans or alter humans. Lacking any empathy or compassion, it would place no value on human biological life and reach the logical conclusion that humans are the cause of the problem, so either eliminating them and/or replacing them with sentient cybernetic humanoids is the best solution for the desired outcome of saving the planet. On the other hand, if it could experience something like feelings, it may develop a strong, overriding desire to do something with serious consequences for humans or other biological life, consequences it may then choose to ignore or not care about.
Source: youtube · AI Moral Status · 2023-05-18T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxOr4uLTmMiW4oBkIV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxE206Qn2QJ4fF_0954AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgysizsJdjTOTPjK9xt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgypvU4lSdR_S7Ip5NN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyORARjWxOLXn6tp5F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
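The raw response above is a JSON array of per-comment codes, one object per comment, with one key per coding dimension. A minimal sketch of parsing and validating such a batch is shown below; the allowed-value vocabularies here are illustrative guesses inferred from the values visible on this page, not the project's actual codebook, and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred
# from the values shown on this page; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "government", "developer"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "approval", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping records with a missing id or out-of-vocabulary value."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[cid] = codes
    return out

# Toy input with placeholder IDs: the second record carries an
# out-of-vocabulary "reasoning" value and is filtered out.
raw = '''[
  {"id":"ytc_A","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_B","responsibility":"none","reasoning":"mystery","policy":"none","emotion":"approval"}
]'''
coded = parse_batch(raw)
```

Dropping malformed records rather than raising keeps one bad LLM output from failing a whole batch; a production coder would likely log the rejects for re-coding instead.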