Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_ibe92ax`: You could make a pretty good argument for the fact that simply having an oil or …
- `ytc_Ugy12vT-9…`: ....you are not born able to draw! I am so sick and tired of this "you simply ha…
- `ytc_Ugzt8mdoR…`: True. Is it pretty? Sure. Is it unique? No. Humanity took years to reach the lev…
- `ytc_UgwQCdoWE…`: If you are depending on public opinion to keep you safe you are dead. Many peop…
- `ytc_UgxXr-oLI…`: In a 2024 interview with The Algemeiner, Luckey described himself as a "radical …
- `ytc_Ugyut24s8…`: I Love AI but this guy is an absolute P.O.S you could literally read his energy.…
- `ytr_UgyiiLPxM…`: Hi there! The mention of SkyNet definitely brings to mind some intriguing though…
- `ytc_UgyO66Ex1…`: LIAR! HES LYING GUYS! HES LYING! CHAT GPT, AI, CANT CODE SHIT! IVE BEEN TR…
Comment
It's a pretty clear answer. If artificial intelligence got to the point where the entity became sentient, then it deserves rights. Imagine if you were unable to feel pain or fear, that still does not mean that I should be allowed to harm or kill you. Also, I think it's quite obvious when we say sentient, it's fair to assume the AI would possess some kind of survival instinct.
youtube · AI Moral Status · 2017-02-23T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgiglmrSOaC-V3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggcVwGpN4yVdngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghMjRhW38shAXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgjoVoZqTEOe13gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiNylxDqi2bZ3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiI_3TVpi3Nz3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugjfwf1Bv-_gf3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjgEocxIvWv6XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjgDdTu4uD0RngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggLE8qEuXr8zngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
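The lookup-by-comment-ID view above boils down to parsing the raw JSON array the model returns and pulling out the record whose `id` matches. A minimal sketch, assuming the response is a well-formed JSON array like the one shown (the `lookup_coding` helper and the two truncated sample records are illustrative, not the app's actual code):

```python
import json

# Assumed shape of a raw LLM response: a JSON array of coding records,
# one per comment, truncated here to two entries for illustration.
raw_response = '''
[
  {"id": "ytc_UgiglmrSOaC-V3gCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UghMjRhW38shAXgCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON output and return the record for one comment ID,
    or None if the ID is absent from the batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UghMjRhW38shAXgCoAEC")
print(coding["policy"], coding["emotion"])
```

In practice the model's output may not be valid JSON on every call, so a real pipeline would wrap `json.loads` in a try/except and flag the batch for re-coding on failure.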