Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "Good points, but... > They're not talking about a general AI, they're talkin…" (rdc_cthrhts)
- "The good thing is that it still looks just A.I. enough to tell. The bad news is …" (ytc_UgxsLgLPM…)
- "Yes, I think he's pointing out the obvious, however, I think there are a lot of …" (rdc_kit5w9q)
- "Funny, today I was literally talking to my friend about this exact topic and sim…" (ytc_Ugz1uKRju…)
- "It will be a modern day version of the final solution, but this time they will h…" (ytr_Ugyx2mCGB…)
- "I have a logical proposition, at least in the case of Androids (robots made in a…" (ytc_Ugg-qYN95…)
- "The more vibe coding you do, and deeper you go, you realize that AI is all hype …" (ytc_UgyCkr986…)
- "I suspect they're not using AI software, they're just using standard software. …" (ytc_Ugz7zvP9_…)
Comment

> Until AI actually developes feelings and self-consciousness, they are a "thing". A tool created by humanity to make lives easier.
> But as soon as AI gets self-conscious we need to grant them the same rights as humans. After all they developed a way of feeling. I am pretty sure humans would not like it if their "creator" denied them any rights,so the same logic should be applied to robots.

youtube · AI Moral Status · 2017-02-23T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UggkDVnEVMM5ZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghQMWB6J9eJNHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjX_pMm2KXZEHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugh6LaNQ51EM83gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjyF-xTboJ9T3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjI-Vcvzkq8V3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UggOhHBMeoRfD3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UggDprghN-jrzXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghmqeH7DCeN_ngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjG1rU7TdnyyXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
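A response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the allowed category values are those seen in this sample (the full codebook may define more); the function name and schema are illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# These are assumptions; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "approval", "disapproval", "fear",
                "outrage", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row needs a comment ID to join back to the source comment.
        if "id" not in row:
            continue
        # Drop rows where any dimension holds an out-of-schema value.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_X","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
print(len(validate_codings(raw)))  # 1
```

Filtering rather than raising keeps one malformed row from discarding a whole batch; the dropped IDs can then be re-queued for recoding.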