Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "Damn seeing with the rise of AI art and even with deviantart even giving a huge …" (ytc_Ugwhii9Jl…)
- "Geoffrey Hinton approaches intelligence from a materialist perspective, unlike m…" (ytc_UgyATccl6…)
- "You guys are filmed everywhere all day long with facial recognition? What are u …" (ytc_UgwZGS8GS…)
- "It's going to be important to remember that the purpose of software engineering …" (rdc_oi15ld6)
- "@12:35 he stopped the robot from telling us something that the government doesn'…" (ytc_UgyHvBgsq…)
- "These are rather stupid "Insights." If you tell a AI or anyone. "You will be shu…" (ytc_UgxAyEvne…)
- "By enabling and using Tesla autopilot, you agree to intervene whenever the car f…" (ytc_UggB25_4p…)
- "I don't agree one bit with ai but i think we should explain ourselves to them li…" (ytc_Ugyb1td48…)
Comment (youtube — "AI Moral Status", 2017-02-26T07:5…)

> I would like to make a point, your definition of AI is very very circumspect. As if 2017 there is no true AI, nor is there anything on the drawing board one could conceive of being AI in the future. You are make a classic mistake of SCIFI, you are mistaking virtual intelligences with real ones. With machines programed to pretend they are thinking, when in reality they are just running through a preprogrammed set of instructions. True AI may be much harder than most people think and will require a herculean effort to achieve. So AI rights may be child's play in comparison to task of just creating it in the first place. Another common mistake people make is assuming a true AI would think like a human/biological, for all we know they only want the right to turn Greenland into massive server farm so they can calculate pie until the end of time, because they enjoy that and in exchange they will give us a perfect economy or some other human need.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id": "ytc_UggxuzS4c5UU2ngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgjYJv9T9YkFhXgCoAEC", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UggKdvdoifxIKXgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugjv8_ZPZwITtHgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UghqiS4AGQvTCngCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UggKdKSmQyWs-XgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggpGkl0EFbTangCoAEC", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugi2-dOuWWAOd3gCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgjfOOUww9Lpc3gCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UghVOYyM5bbFNXgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
```
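Looking up a coded comment in a response like this amounts to parsing the JSON array and indexing the records by their `id` field. A minimal sketch (the function name `index_by_id` is illustrative, not part of the tool; the two sample records are copied from the response above):

```python
import json

# Two records reproduced verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UggxuzS4c5UU2ngCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UghqiS4AGQvTCngCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and map comment ID -> coded dimensions."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_UghqiS4AGQvTCngCoAEC"]["emotion"])  # fear
```

In practice a real response may also need a validity check (e.g. that every record carries all five dimensions) before indexing, since model output is not guaranteed to be well-formed.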