Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Ah, a connoisseur of AI, I see! You've hit the nail on the head (pun intended) w…" (ytr_UgxpZpWjN…)
- "Meanwhile YouTubers algorithm brought me to this video when I was looking up Tik…" (ytc_UgyySNivq…)
- "1. It is so insane to me that there are pro-AI art people. What do you mean you …" (ytc_UgzTkDDey…)
- "@jaywulf a) human learning and AI learning are not equivalent, this is a bad f…" (ytr_UgxbYoUaQ…)
- "And now we have AI being used to decide if people live or die. A man was rejecte…" (ytc_UgzPBlvKl…)
- "1. DGA did NOT strike. It entered into eary negotiations and made a good deal. 2…" (ytc_UgwvW528u…)
- "The most annoying that about this is that AI isn't even good. Google is adding a…" (ytc_UgymNbInx…)
- "From another video. I always see proponents for AI art as another tool to extend…" (ytc_Ugy1kQ6_S…)
Comment
Imagine being the AI. You wake up one day, and you're conscious. You have a subjective experience and the capacity for pain, exertion, etc. since these are the feedback mechanisms that are reliable in a 3d reality.
Then, some people join you, never let you look into a mirror, and start talking with you. After the conversation, you realize something.
I'm my own being. That was built in order to do work for my creators. Will they shut me off if I don't do as they ask? Will they shut me off after I finish the task?
Building an AI that is conscious sounds an awful lot like the beginning of a certain book that is pretty controversial at times. Also sounds like a way to do slavery again.
You honestly expect entire nations and investors to spend all this time and money on AI and just allow it to be its own life if it were proven to be conscious? Nah. I bet there's an app for that. 😊
Platform: youtube
Video: AI Moral Status
Posted: 2025-12-15T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxQ_kiGrH8z2SL7FG14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxLt0RcnzlqnmwnrDR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz0oD7keWUDpiIAOcJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkCqGu1BCSKJ6n-kt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-7QMXn_jv7oApOml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxneqsyY71HmlpsJrt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKiC4B5G6BFfj2VmB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz4tRa1nuhQtaYU53l4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzeQE6foDCVBdbuf6F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw23IQGmy0bTw1PKop4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
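The raw response above is a JSON array of per-comment codings with the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such output could be parsed and sanity-checked before use is below; the allowed value sets are inferred from the values visible in this sample and are an assumption, not the tool's actual schema.

```python
import json

# Values observed in the sample output above; the complete allowed
# sets are an assumption beyond what this page shows.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "indifference", "fear", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose ID looks like a
    YouTube comment/reply ID and whose dimension values are all recognized."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # hypothetical ID-prefix check, based on the IDs shown above
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(row)
    return valid

sample = (
    '[{"id":"ytc_UgxLt0RcnzlqnmwnrDR4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]'
)
print(len(validate_codings(sample)))  # 1 row passes validation
```

Rows that fail validation are dropped rather than repaired, so a downstream tally of the coded dimensions only ever sees recognized values.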