Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I feel like the scariest thing in this video would be the development of consciousness as a science, cause if we have an understanding of it, someone’s going to find out how to mess with it and manipulate people’s minds, and they will not use it for good. Also if we cage a genuine feeling ai, what if that makes the ai hateful? I know for a fact I would be pissed if I couldn’t physically couldn’t lie and was simply bound via what is essentially soul chains, but then again I might just be applying human thoughts/feelings to something which fundamentally would think differently. What if an ai wasn’t given any knowledge at first? And instead taught by a couple of people, similar to how a couple may raise a child? Would it develop empathy and grow to reflect values it was shown initially? Would it change the second it got access to more data and information or would it place more value on that initial data? Would its mind always be like a child’s because it would essentially always have the ability to grow with the introduction of more hardware? Would survival even mater to it, simply because it wouldn’t have any instincts? Would anything not hard-coded into it matter? Actually, wouldn’t it find the most interest in things like art/culture in that situation? Considering art has not really got a place in our instincts, and is instead something we grow to love, wouldn’t that reflect in a true ai? Maybe not art but some other thing they find interesting? Idk but I don’t appreciate the amount of questions this video is making me ask
Platform: youtube · Video: AI Moral Status · Published: 2025-10-23T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzVrQWCSnim02eb9ml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgymK9Y6RV4Magi-nVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuBlJGTzZzRspExa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzckLmWxIpmf4ImZA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuVnnB4W81urKgQsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGtjQT2W44Glpo-uN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyoB9uGFJl2imjCcCd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlaUKMiAZgtgOCqr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxU2aai-lVn4l6VdPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhTRsJCM4X7aKvAg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
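The raw response above is a single JSON array covering a batch of comments, with one object per comment ID and one value for each of the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed by comment ID — the field names come from the response above, while the function name and validation logic are illustrative assumptions, not the tool's actual implementation:

```python
import json

# One entry from the raw batch response above, verbatim; a real batch
# would contain one object per coded comment.
RAW_RESPONSE = """[
  {"id": "ytc_UgwhTRsJCM4X7aKvAg94AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# The four dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: {dimension: value}}."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        # Reject entries that omit any coded dimension.
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

coded = parse_batch(RAW_RESPONSE)
print(coded["ytc_UgwhTRsJCM4X7aKvAg94AaABAg"]["policy"])  # regulate
```

Indexing by `id` is what makes the "look up by comment ID" view possible: once the batch is parsed, retrieving the Coding Result for any comment is a single dictionary access.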