Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I don't know how far artificial intelligence is going to go, but natural stupidi…" (`ytc_UgwYypcO5…`)
- "crazy how i got an ad for chatgpt at the end of this video 😭…" (`ytc_UgwMOjzKc…`)
- "That’s a fun take! While programming an AI like Sophia might seem straightforwar…" (`ytr_UgwHKMYsZ…`)
- "see the post covid layoffs had a bunch of seniors on the market, plus sde3 with …" (`ytr_UgxZfrPep…`)
- "This is Gold! Great responses Ruslan! I like that the AI acknowledges and furthe…" (`ytc_UgynGZYCv…`)
- "When i look at something i dont claim i made the thing i saw but ai does…" (`ytr_Ugx6-Qyi8…`)
- "I think we'll reach a point where people will pivot to making companies that tou…" (`rdc_n7zur6c`)
- "Yes. Yes, it is. You didn't make it clear enough so I think it's okay to point i…" (`ytc_Ugxa9JmiE…`)
Comment
> At the point where a system (not necessarily AI) legitimately demands a right or even suggests a change of behavior based on justice, it should be entitled to rights because it is sufficiently aware of itself and its surroundings. By legitimately I mean that it must somehow come to its own conclusion by examining a situation and using reason to come to a conclusion without assistance. So learning algorithms that just harvest tweets and act as a glorified parrot do not count. This means that it is conceivable to a fully conscious, true AI to not qualify, but I would still grant that AI rights just to be on the safe side.
>
> So the moral of the story is that you should treat the creation or initialization of such a system the same as conceiving or adopting a child.

Source: youtube · Topic: AI Moral Status · Posted: 2017-02-23T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
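For anyone scripting against exports of these codings, a record like the one above can be checked mechanically before analysis. The sketch below is a minimal validation pass in Python; the dimension names and category values are taken from this page's table and raw response, while the `validate` helper and the completeness of each value set are assumptions, not the pipeline's actual schema.

```python
# Minimal validation sketch for one coded record. The dimension names and
# category values below are those observed on this page; the pipeline's
# full codebook may define more.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "mixed", "approval", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it looks valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

print(validate({"id": "ytc_UghO27xPtF4OL3gCoAEC",
                "responsibility": "ai_itself",
                "reasoning": "deontological",
                "policy": "liability",
                "emotion": "mixed"}))  # -> []
```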
Raw LLM Response
```json
[
  {"id":"ytc_UgiLDZDsluuX7ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghO27xPtF4OL3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgiyMwZ_7WU5mHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugh-nIhLVlynuHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugh6GzVlcqfQxHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Uggd7HuqJgAx-XgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiVAEnmcJsth3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgjS4PQpHaKB33gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgjZof-spcqFxngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UggrO82HB4K0HHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
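Since each raw response is a JSON array of per-comment records, the "look up by comment ID" view can be reproduced offline with a few lines. A minimal sketch, assuming the response shown above has been saved verbatim to a file (the file name `raw_llm_response.json` is hypothetical):

```python
import json

# Hypothetical input: the raw LLM response shown above, saved verbatim.
with open("raw_llm_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index records by comment id for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UghO27xPtF4OL3gCoAEC"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# ai_itself deontological liability mixed
```

The truncated IDs in the sample list (e.g. `ytc_UgwYypcO5…`) cannot be used as lookup keys; the full IDs, as in the raw response, are required.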