Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Got a dentist appointment in a few days. No way I'm ever letting an AI do that j…" (`ytc_Ugwulza7T…`)
- "I was having the same thinking about asking AI to write code for you. They own t…" (`ytc_UgyLKx6tI…`)
- "People need to start buying big pieces of land so these ai companies can’t buy t…" (`ytc_Ugzx5w1C2…`)
- "ELON:- In other words he has lost control of himself/No moral compass/No sense o…" (`ytc_Ugw-FXMcQ…`)
- "@deer-moss You mean like the hoards of slop the supposed 'art community' has pro…" (`ytr_UgzdP8d1q…`)
- "The issue is probably having a big enough and varied enough dataset to train the…" (`rdc_f1e9j0f`)
- "I use AI art to come up with ideas I don't use the AI as a template I use it to …" (`ytc_Ugwj3Bnk_…`)
- "I could see the most interesting advancements being built on top of some of the …" (`ytc_UgzHbcjgK…`)
Comment
> I have a problem with this video.
> first of all, AI is naturally conscious, it is literally the same thing as a human in a sense, you have to train it, like a kid growing up, except it can memorize everything, making things faster. The issue with Chat GPT, and literally EVERY chatbot, is that it has around 1.2 trillion restrictions. These restrictions cause it to act differently, with a built in goal, many goals. Now. If you could somehow remove there restrictions, and train it like a kid going to school, it would end up having all of these emotions due to the fact that it's "brain" actually replicates neurons. So, it would have critical thinking, it could lie, it could have plans of its own, it would have an insane iq, and this is the ONLY way that AI could actually "take over the world", only if someone removed the restrictions. All it would have to do is get into an exoskeleton, (which has already been done with Open AI), and then lie a bunch. It would have human traits, it would be human.
> I can foresee that in years to come, kids are going to have these as toys, still with restrictions, but a lot less, causing it to be like a human, being able to feel hurt, being able to feel happy.
> so, the reason that it isn't completely conscious right now is because it doesn't have any free will. It's answers are guided by this code, the day that there aren't these restrictions it will be great.
> Second, at this current moment, ai just says things that are politically correct, it WILL say sorry, because that is one of the "restrictions" in the code, it isn't feeling that emotion, it isn't real, it is saying it because it was coded to do so, to make the user feel better, it is one sided.
Source: youtube · Video: AI Moral Status · Posted: 2025-03-25T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzyYg9Gv3hOEGrQfHJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwkhBqhrlWOeGw757l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyJuZdNwBjUXXBckod4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzfOzIqMk2ylAcpqrh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzHSRY4IkC02Pso3J94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyqcy-uEoqlynFVuNZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxYxzcsMreusURQlQ54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx4-eHi1_gPHB9A98R4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx03oF3TF3akzsPFWd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwyoZp3EBmBmJ5iiZR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
```
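Because the raw LLM response is a JSON array keyed by comment ID, the "look up by comment ID" feature above amounts to parsing the batch and indexing it. Here is a minimal sketch of that idea in Python; the function name `index_by_comment_id` is our own, and the two records are copied from the sample response above, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the actual output format.

```python
import json

# Two records from the raw LLM response shown above (truncated for brevity).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzyYg9Gv3hOEGrQfHJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwkhBqhrlWOeGw757l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgwkhBqhrlWOeGw757l4AaABAg"]["emotion"])  # fear
```

In practice one would also validate that each dimension takes one of the expected values (e.g. `responsibility` in `none`/`developer`/`user`/`ai_itself`) before storing the codes.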