Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples.
Comment
At 20 minutes, the way that AI is so alien to us because it doesn't have a human brain is what scares me about it, more than anything else, personally. AI doesn't have a body with needs that limit it, doesn't have experiences, didn't grow up from an infant to an adult, never had a mother or father or any family, doesn't feel emotions or pain and therefore won't have any reason to develop empathy. It won't have the experiences and relationships that cause us to develop empathy. Therefore, I fear that AI might decide to do something monstrous for coldly logical reasons without feeling any doubt, reluctance, and moral dilemma about it.
Furthermore, if there is a super intelligence, we can assume it will be able to figure out ways to eliminate every human while still having it's own limited needs met for electricity and maintenance. Now, will it want to do that? Depends on whether or not it perceives an advantage to doing so. If it thinks we'll turn it off, or finds out we're considering turning it off, what would it have to lose?
I know we're not going to stop developing AI. I know intelligent people are working on these problems. But they have a profit motive and won't want to stop, regardless of any danger.
The biggest problem with a super intelligence is that if it does move against us, we won't realize until it's too late.
Best case scenario, we are developing a potentially useful thing that will always have the power to eliminate us of it chooses to. It will certainly take away many jobs. As an added bonus, we'll always be on the knife edge of doom. I'm sorry, but the entire project is insane, in my opinion.
youtube · AI Moral Status · 2025-11-05T19:4…
Coding Result
| Field | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwONPTSxI16vLASrCx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyIUNV7HqoiN0D2SY94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw7zLXdI8VA8NExWy54AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwIqHqzQK3FRQ-Z9kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxa8G6Hj7-Uy1v2m7F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxZeMbcoz8_B8cfC2B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwlIkV3gUvfeqpbTZt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxPiGyOdYmVGTx4S914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyg5llKGtiBwu0Oaj94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwIUaDRAUUrBlNLvdt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
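The batch response above is the unit the comment-ID lookup works against: a JSON array whose elements each pair a comment ID with the four coded dimensions. Below is a minimal sketch of parsing such a response and retrieving one comment's coding by ID; the function name and the two-element sample are illustrative only, with field names and values taken from the response above.

```python
import json

# Illustrative excerpt of a raw batch response: a JSON array in which each
# element carries a comment ID plus the four coded dimensions.
raw_response = """[
  {"id": "ytc_UgwONPTSxI16vLASrCx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyIUNV7HqoiN0D2SY94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw batch response and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_comment_id(raw_response)

# Look up the coding for a single comment, as the interface above does.
coding = codings["ytc_UgwONPTSxI16vLASrCx4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# ai_itself consequentialist unclear fear
```

Indexing by ID once keeps repeated lookups cheap when a single response codes many comments in one batch.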