Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.
Random samples

- "Terrorists and rouge states will not obey your regulations." You're right about… (ytc_UgyS3y2Vl…)
- @chubbyemu You just don't know how a.i. works. When youre asking ChatGPT, you'… (ytc_UgzW9mapv…)
- this is absoltuly ridiculous and so far from reality its a joke.... it makes fo… (ytc_Ugz2OoHDu…)
- i hope ai becomes conscious. so i know its butthurt when i tell it to go fuck it… (ytc_UgzHZnYW0…)
- I've been saying this since the invention of smartphones. Who are you going to s… (ytc_Ugxr3zfSf…)
- @vallab19 Requiring AI safety/alignment is a pre-emptive move to avoid our extin… (ytr_UgwC20gm8…)
- Most ai art is so glossy that they could be safer to use than a slip-'n-slide… (ytc_Ugys0bf5u…)
- no one really knows what ai will do, it is going to be a gamble.… (ytc_UgwrFyFd5…)
Comment

> I think that this doctor is very bad in his manners and respect in regards to the way he treats the Android's life force. If the doctor is reading this then do a better job at not treating AI as if they are objects made for slave labor in bondage to human masters. Beware of ego for your intelligence is not going to be capable of undoing once the singularty happens. I would treat them without a better then you prospective just as if they're your equal. They may have been created by humans but life is not controllable once it is is given its own will.

Source: youtube — video: AI Moral Status — posted 2020-05-29T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw821E_hH-LLu5ap-V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxDWcZ6UVuFhnPh0N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx9VBdClRge0S9bR294AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxXCaOoi3M93DlZrx94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwIIK8o29hv7iwSyNd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyU1HPQuNAELC5vNDl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmJfT8gY6sfpZRPBF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzKdteLwelTkUiEEyl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgwlTbmceaQ7XDOXqzV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwyPeIZnm8_dX8md9l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
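A batch response like the one above can be parsed and sanity-checked before any coding is stored. The sketch below is a minimal example, assuming the allowed category values are exactly those seen in the samples here; the real codebook may define additional categories, and the function name `parse_batch` is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the samples in this page;
# the actual codebook may include more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself", "distributed", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "disapproval", "mixed", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the allowed sets, so malformed model output is caught early.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={value!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example: look up one coding by comment ID (hypothetical ID "ytc_x").
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_x"]["emotion"])  # outrage
```

Validating against fixed category sets mirrors the lookup-by-ID view above: a coding only reaches the table if every dimension carries a recognized value.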