Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its comment ID.
Comment
I'm on the robot side of the argument. Be it by accident or intent, bringing anything to the same level of thought processing and awareness of humans should at least give them the same protections we, as a species, try to afford other humans. I mean, this is still a long way off but having ground rules in place for it can't hurt.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted at | 2017-02-23T14:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgjuBzXiEFOOb3gCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiOFP9eFj2_B3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UggIDwaoyRdOpngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugi--P2KZ4P-SHgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgjaQVZhHCDsIHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugj9Z8KXXidlHXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggPfF3JrQEbgXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugh-PGzxflxq93gCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"unclear"},
  {"id":"ytc_Ugh8FxkHzWzJP3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugi5vOQTUgGENHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
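The raw response is a JSON array with one object per comment, each carrying the four coding dimensions. A minimal sketch of the "look up by comment ID" step, assuming Python; the `raw_response` string below reuses two entries from the batch above, and the variable names are illustrative, not part of the tool:

```python
import json

# Raw LLM response, truncated to two entries from the batch above for brevity.
raw_response = """
[
  {"id": "ytc_UgiOFP9eFj2_B3gCoAEC", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UggIDwaoyRdOpngCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any single comment's codes can be retrieved.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

entry = codes_by_id["ytc_UgiOFP9eFj2_B3gCoAEC"]
print(entry["policy"])   # → regulate
print(entry["emotion"])  # → approval
```

Because every object carries its own `id`, a lookup like this also makes it easy to verify that the dimensions shown in a Coding Result table match the corresponding entry in the raw batch output.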