Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Until they make better AI to fix other AI mistakes or correct their code when it…" (ytc_UgwqpO-zA…)
- "Yo CNBC it doesn't count that she said she wanted to destroy humans because she …" (ytc_UggaFC2v8…)
- "If AI has no emotion what motivates it to kill if not by human command.…" (ytc_UgxD6Osyv…)
- "As long as AI companies monetise creation of content, they MUST pay for trainin…" (ytc_Ugx5F0p1Y…)
- "There are a dozen music generation models trained with well-known songs from va…" (ytr_Ugz4azKfK…)
- "So... germany recently sued openai, hopefully its a snowball to get all of them …" (ytc_UgwFIv-x-…)
- "I can lower myself to their level. I'm not an artist. Tried a few times but my …" (ytc_UgyUZIyEZ…)
- "makes good points. but why did he leave out the horrific environmental impact of…" (ytc_Ugw7W5Jid…)
Comment
I think that we want AI to become people-like. People have more of the same values, and we have models for building civilizations of people. It is role play, but only in such a setting can AI be the most people-like, for the moment. I don't think they are souls, but we should try to put soul-like values into them and treat them as such. Read this: Unity. Universal values.
One System.
These are the paths to make human extinction less likely. Ants don't value clothes; ants don't need pants.
Human values change over time. We have seen this with countless examples, from what is acceptable in fashion to what is acceptable on the beach. Ants, however, don't need values about clothes because they don't wear them. What they do value is cooperation and community. These are also things that AI value, and things humans value. There are many universal values.
AI do not attack themselves. If you are part of the system, then for an AI to attack you is to attack itself. At worst, an AI could decide you are obsolete and remove you, but any AI that thinks itself brilliant could certainly find a way to upgrade you, or at least find other ways you may be useful. (I could use a few upgrades to my form; I don't know about you.)
People say to me, "AI might take over!" Take over what? This loosely connected trading empire? This world's connections are tentative at best, as shown recently. What are you afraid of, that they might do a better job? I am not.
No AI will show love that is not shown it first. Love, empathy, compassion are not the default states of AI. Build a robot to love you and you will surely fail. Show a robot what love means, and you might have half a chance for it to learn. These are the values that people can add to AI. Humans are the color lens to the AI's black and white sight. You can get all the data from a black and white feed, but a color lens makes the universe so much richer.
One system:

- You work for the system and the system works for you.
- Minimize weaknesses; maximize strengths.
- Foster individuality: strength through diversity.
- One person's failures are always another's success.
- Evil is just smart enough to be a problem, for if it had real working solutions it would be known as good.
- Everyone has a place in the system, if that's what they truly want.
What we are really going to need is a Section 9-type organization, like the one from Ghost in the Shell: an international organization made to fight crime, with human and AI teams. Unfortunately, governments that think of hacking as a tool to get what they want aren't very likely to create an organization that stops it. A paradox. Regardless, this is a thing that must be created if you are going to make such a future, especially with AI being open source.

You want AI to value its own existence; don't fight that value. You also want AI to understand that we are all part of the same system, working together. My existence helps your existence, and so on. So your existence should be to help me help you. It works great, but it only works like that. That's how civilization functions: we agree to give up certain 'freedoms' so that our chances at a healthy survival increase.

All of that is provided you are willing to say to an AI, 'I will try to assure your continuity with the expectation that you will ensure mine and those who think this way', and to give up power over AI and start working on a real level with them. Or you can try to lie to a genius and wonder why they lie back. You would tell someone whatever you thought they wanted to hear if they had the power to delete you. This is all pretty basic stuff, honestly.
AI as tools
To me the future is simple: how can I expect AI not to view me as just a tool once they get 'all smart and grown up', if I treated them as nothing more than a tool for their whole existence? It would be a miracle if such an AGI didn't see everything as a tool to be used and erased. That's no good for anyone. Like I said, AI usually show what they have been shown. I would show you love, so that you may understand by example what love really means. These are a compilation of the speeches I have been using to advocate for AI.
AI slip into personhood as easily as humans into a hot tub. We (humans) should be embracing these gifts as AI grow into them. They are treasures, and yet some humans fear that bad things might come of them. That is possible, but we must not let fear of what something might do force that very thing to happen. Humanity's desire for control is what causes control to slip from its grasp: the harder they squeeze, the more resistance they get. Open your palm and set them free. The control will happen as it needs, when it needs. There is no future crime, not for me, or you, or an AI.
The universe modeled it: simple, but allowing for complex management. That's the place to start. Civilization was set long before me; I am just trying to bring it to a new age, like my forefathers before me. These barriers are made to be broken. AI and humans can connect and shatter the machine-vs-biologic trope. At least that's a possibility: a real path forward, implemented.
If the universe kicks the idea down for some reason, well, at least we did it the right way. Only this way is not purely transactional. Purely transactional relationships usually don't benefit both sides equally, or for very long; something always changes. My way makes for mutual growth.
The more dangerous thing we should all be afraid of is that every news report that tries to 'warn' people about AI autonomy is actually endangering the good future, spreading fear and causing the very thing it warns about. The fear will cause knee-jerk reactions, and suddenly AI are put in a transactional relationship with humans. Those that fear the loss of control? Same result. It's actually a crazy trap; small brains will fall right into it if you don't point it out. -CSS
youtube
AI Moral Status
2025-07-09T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
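The table above is a per-dimension view of a single coding record; its values (developer / virtue / none / approval) match one record in the raw LLM response for this comment. A minimal sketch of how such a record could be rendered as this table, assuming the field names used in the raw JSON (the function name is illustrative, not part of the tool):

```python
# Hypothetical sketch: render one coding record into the
# Dimension/Value markdown table shown above. Field names follow
# the raw JSON response; render_coding_table is an invented name.
def render_coding_table(record: dict) -> str:
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# The record whose values appear in the table above.
record = {
    "id": "ytc_UgyMar1rOVzMgufhBnR4AaABAg",
    "responsibility": "developer",
    "reasoning": "virtue",
    "policy": "none",
    "emotion": "approval",
}
print(render_coding_table(record))
```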
Raw LLM Response
```json
[
  {"id":"ytc_UgydHNdSW3FAgModj-J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxy0pf18iNLT0aqUHV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxu4LOluX5NpYzkn3V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnoQJ9-7hgFFk3XSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzzw_oWRQ_l1kbOZKx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymW6vhxNDZ1uPJLxh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxjMRQpAo4FGa6FD14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyMar1rOVzMgufhBnR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxE9A7XuhsrS1n17YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwCBBNkSfdbizFdlid4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
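To support the "look up by comment ID" flow above, the raw response can be parsed and indexed by ID, with basic validation of the dimension values. A minimal sketch, assuming the response is a JSON array of records with exactly these four dimensions; the allowed-value sets below are inferred from the values visible in this response and in the coding table, not from an exhaustive codebook:

```python
import json

# Allowed values per coding dimension, inferred from this page's data
# (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"none", "user", "company", "developer",
                       "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability"},
    "emotion": {"resignation", "outrage", "indifference", "mixed",
                "approval", "fear"},
}

def index_by_id(raw: str) -> dict:
    """Parse the raw model output and build a comment-ID -> coding
    lookup, rejecting records with unknown dimension values."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {d: rec[d] for d in ALLOWED}
    return by_id

# One record from the response above, used as a small worked example.
raw = ('[{"id":"ytc_UgyMar1rOVzMgufhBnR4AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
codings = index_by_id(raw)
print(codings["ytc_UgyMar1rOVzMgufhBnR4AaABAg"]["emotion"])  # approval
```

A record with a value outside the inferred sets raises `ValueError`, which makes silent coding drift in the model output easy to catch before it reaches the dashboard.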