Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Well, Pro-AIs always mention about the advantages of AI being that it is perfect… (`ytr_UgzyKvHg1…`)
- "Podcasts could go out of business." I know he's kind of joking, and AI-generate… (`ytc_Ugx23ueri…`)
- Spooky! i was at fanexpo this year and was a little suspicious how alot of booth… (`ytc_Ugx5-Gz0S…`)
- I DONT GET THE REASON FOR HATE LIKE I WOULD UNDERSTAND IF HE LIED ABOUT IT AND S… (`ytc_Ugz7iKb_Z…`)
- The "I" your chatgpt was refering to is *this* version of chatgpt. In saying tha… (`ytc_Ugxjx6V7L…`)
- I consider myself pretty technologically adept, but I had no idea. Like, I absol… (`rdc_h94htx6`)
- Why are we not teaching AI our best values. Teach them like we would a child. A … (`ytc_UgwXNCEPW…`)
- Waymo has improved significantly lately. I have been driving behind waymo cars m… (`ytc_UgyuVy7UW…`)
Comment
> If an AI's sentience/right to rights depends entirely on humanity's choice on whether or not to make them capable of feeling pain, then we could just as easily (well, for one, NOT) program them to become the very opposite of what we are.
> Humans are emotional beings that feel all types of pain and dislike it very much.
> If AIs don't just spontaneously become conscious, then they do not need rights, as we can just make em love what they do or some shit.
youtube · AI Moral Status · 2020-09-01T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxMCKqDtgnNfcH5bhN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwYR9O-VGQOcl-5i-p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxUNUaUyRx3aIlKu1V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7rZKvgKFGyggcJCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxqx8su6wktyNKm1Ad4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy-n-E8LayJ88lzLLZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8d1lOlGg65sIDVuh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvpBSM5VZTNVScXLB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxq0LqphmBYNKFlsDd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgygZF5E3ttKUqK02ul4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
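The batch response above pairs each comment ID with its codes on the four dimensions from the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and one comment's coding looked up by ID; the `lookup` helper and `RAW_RESPONSE` variable are illustrative, not part of the tool itself.

```python
import json

# Two rows copied verbatim from the raw batch response above; the model
# returns one JSON object per coded comment. This is a sketch, not the
# tool's actual parsing code.
RAW_RESPONSE = """[
{"id":"ytc_UgxMCKqDtgnNfcH5bhN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxUNUaUyRx3aIlKu1V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

def lookup(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    return None

coding = lookup(RAW_RESPONSE, "ytc_UgxUNUaUyRx3aIlKu1V4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

The second ID carries the same values as the Coding Result table above (developer / consequentialist / none / indifference), which is how a stored coding can be checked against the exact model output.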