Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The irony of all of this is that this video is probably generated by an AI (((…" (ytc_Ugx263dRX…)
- "Jesus Christ every good sci-fi movie in the past 30 years is coming true. It's …" (ytc_UgwRXFX4u…)
- "Well you literally said you had "coucil" so im guessing it's because of that and…" (rdc_jklwmb1)
- "It's based on your personal preferences (e.g what you mostly click on.) You are …" (ytr_UgxJQ_gzZ…)
- "I know an acquaintance who claims to be an innovator for using A.I. for their wr…" (ytc_Ugw31-zDV…)
- "Interesting interview! It's fascinating to see how AI responds in these situatio…" (ytc_UgwlzTpZc…)
- "I think people would rather fight Terminators than fight each other for scraps a…" (ytc_Ugw7cxG8E…)
- "I think the main reason they are mad is that they only see the outcome. They don…" (ytc_Ugz0Lg8m7…)
Comment
My opinion on this:
First gut reaction is - NO, here we go again trying to humanize things.
After watching the video - If they achieve a TRUE self aware AI then it should have "robot" rights that are not similar to ours, like in the video our rights are based on human pains etc etc so theirs should be based on robotic (for a lack of a better word) "pains"
In the end I think that Humans try too hard to make things 'human'. No matter how hard we/they try the robots will never be human and in my opinion they will always be and are tools, no matter how smart.
What do you think?
Source: youtube · Video: AI Moral Status · Posted: 2017-02-24T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
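A coded record like the one above can be checked against the dimension vocabulary before it is stored. Here is a minimal validation sketch in Python; the allowed value sets are inferred from the coded samples visible on this page, not an authoritative codebook:

```python
# Allowed values per dimension, inferred from the samples shown here
# (an assumption, not the project's official codebook).
VOCAB = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

def validate(record: dict) -> list:
    """Return a list of error messages; an empty list means the record is valid."""
    errors = []
    for dim, allowed in VOCAB.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

# The record from the table above.
record = {"responsibility": "none", "reasoning": "deontological",
          "policy": "none", "emotion": "mixed"}
print(validate(record))  # []
```

Records that fail validation (e.g. a hallucinated value from the model) can then be flagged for re-coding rather than silently written to the table.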
Raw LLM Response
[
{"id":"ytc_Ugha-oJt_DsgWXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiJNQy1_UpMX3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgjTxWp0UNLVk3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggPw_bN0ng11ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiRC98B3aBkr3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiKrWdtOG_Tx3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghJw5uloiiWqngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghfigvnGzz6L3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgiLxWdsHUjz6ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg11ud6zdAB_XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
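The raw response is a JSON array of per-comment codes, which makes the "look up by comment ID" view straightforward to back with a dictionary. A minimal sketch, assuming Python and the field names shown above (only two of the rows are reproduced here):

```python
import json

# Two rows of the raw LLM response, as received.
raw = '''[
{"id":"ytc_Ugha-oJt_DsgWXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiJNQy1_UpMX3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"}
]'''

# Index the coded rows by comment ID.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    # Raises KeyError if the model skipped or mangled an ID.
    return codes[comment_id]

print(lookup("ytc_UgiJNQy1_UpMX3gCoAEC")["emotion"])  # fear
```

Indexing by ID also makes it easy to detect dropped or duplicated IDs by comparing the dictionary's size against the number of comments sent in the batch.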