Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "It's good to know the San Francisco Police Department has not yet been subverted …" (ytc_UgxTELPiX…)
- "I find that while ChatGPT generates ideas, Olovka does a much better job of stru…" (ytc_Ugygbrb6a…)
- "There are very few courteous, cautious, considerate CDL drivers on the road. At le…" (ytc_UgwRLi69X…)
- "I think it would be good for context if you included some information about how …" (ytc_UgxaHw3hb…)
- "So how would a fresh college graduate get experience when AI takes over all the …" (ytc_UgxahL5_I…)
- "Curious if creators are utilizing mass applications and resumes being submitted …" (ytc_UgxhmUEvv…)
- "Just putting this out there. The second most intelligent species (Pan troglodyte…" (ytc_Ugw6fJi2Y…)
- "So AI is abusing polling to lobby for its own sentience? While it doesn't take s…" (ytc_UgwlWohTZ…)
Comment
This is the one video by Kurzgesagt that I find pretty illogical. Robots we make cannot go on to create robots that are more intelligent than them, and if they do, it's because we programmed them to, hence cancelling them out as the original creators, seeing as though we created the designs or the base ideas and concepts ourselves. They have no free will, or creativity, or sentience. They are not capable of thinking up new ideas or designing a robot far better than any human could have. The human brain is... trillions of miles ahead of any computer chip. Yes, a computer may be able to calculate faster than a human, but humans have creativity and ingenuity. Computers just do what we tell them to do.
Then again, this entire video might be a what-if scenario just for the sake of talking about robot rights, and if that's the case, I apologize for being an inCOMPETENT OAF.
youtube · AI Moral Status · 2017-02-23T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UggjdW6J36gm6ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjxFsSVeCDnM3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugggw7gD7mYXeHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgggWgOiFrTcO3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugg7XZMoCWN3CXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UggI6IWQzP_T3HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiuuZL8zufHAngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgglkOmxDN21AHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghwYK5jq-QSJHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiwNbDwLt7DeHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
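The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed into a lookup by comment ID, flagging unexpected dimension values, is shown below. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the allowed-value sets are only those observed in this sample, so the real codebook is likely larger.

```python
import json

# Dimension values observed in the sample response above.
# Treat these sets as illustrative, not an exhaustive codebook.
OBSERVED = {
    "responsibility": {"none", "unclear", "user", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "liability", "ban"},
    "emotion": {"mixed", "indifference", "approval", "outrage", "fear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    printing a warning for any value outside the observed sets."""
    coded = {}
    for rec in json.loads(raw):
        codes = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in codes.items():
            if value not in OBSERVED.get(dim, set()):
                print(f"unexpected value {value!r} for {dim} in {rec['id']}")
        coded[rec["id"]] = codes
    return coded
```

Keying the result by comment ID matches how this page is navigated (look up by comment ID), and the warning path surfaces any code the model invented outside the expected scheme rather than silently storing it.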