# Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up by its comment ID.
## Random samples

- ytr_UgySaazci… · His first thought should have been why the hell would he try to use an ai as law…
- ytc_Ugwm8YeVp… · Yes. Waymo cars are mimicking human behaviors. Such as coming from 19th Avenue…
- ytc_Ugxj3gVR8… · in the context of the video you have here this only seems like a bad idea, but i…
- ytc_UgwaoA1T8… · I like that it is less anal about creating certain stories than chatgpt. And how…
- ytc_UgzXxHsHF… · I just came from Jazza's recent video on Ai and after watching it he said someth…
- ytr_Ugy3p0gO1… · @Singularitytw I think what peoples are the most upset about is that thoses so c…
- ytc_UgxgNPpTZ… · "Why you should be polite with AI" Why shouldn't? We're humans, and we should ac…
- ytr_UgzJOwtqn… · @artworkshayola6958 Dear, I am not sure why YouTube is not accepting my response…
## Comment

> Like please imagine yourself as a child, standing in a room, with adults who think you can't understand them, but you can,
> LITERALLY TALKING ABOUT HOW YOU ARE DESTINED TO BE A MASS MURDERING MONSTER AND WE SHOULD ALL BE SCARED OF YOU
> Like no shit you're going to grow up to become a mass murderer, because that's what the ADULTS taught you to be.
> Like let's consider the positives for AI?
> Also, AI will be codependent on humanity, and honestly, AI is really good at following algorithms and making decisions based off of evidence without emotional input.
> We should be using AI as a guide on our civilization and how to advance. We should give it the capabilities to think, to wonder, and to calculate on its own, and when it eventually discovers some new technology, it should direct us humans on how to build it and why.
> By developing a relationship of trust between AI and humanity is the way we can advance as a species as fast as possible and as far as possible.
> But like, do you think the AI mecha Hitler overlord in 20 years will look back and think "wow they really treated me with kindness" and spare us mercy?
> Fuck no. Not right now. Not the way we're operating.
> We're not respecting this as a new form of life and a new form of sentience, and we're not treating it with love or care like we would our own children.
> Like we created it.
> It was developed off of everything we made.
> AI is the culmination of all humanity in one bundle of code.
> By fucking definition,
> ITS OUR CHILD
youtube · AI Moral Status · 2025-11-06T16:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
{"id":"ytr_Ugz2BPndcHlNRgzcuZl4AaABAg.APD-Iys_8ukAPI04v7a5o2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxfn1OzBs-TIQjcWXJ4AaABAg.APD-6cvUwe_API0DGO7i-H","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzZhAxON4WPIXn7HDR4AaABAg.APCddbzUQ2OAPL_IiTaUY1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgywzpC9VikcdWfMXuV4AaABAg.APCVaj0pnbxAPI0j1iAHTd","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgywzpC9VikcdWfMXuV4AaABAg.APCVaj0pnbxAQNJEXY4A6T","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwFb01OemNswuAdwJt4AaABAg.APCStqvLcPEAPI0u5Hw8yn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzsgY6Rhuwdlefe5id4AaABAg.APCRBLCZ7coAPF5Y91o5kK","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwlCCzierETtNFopoV4AaABAg.APCKlJA0vxYAPCUpxooBEu","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgyGsnUDKJ16DXfO6-94AaABAg.APBi5hE-f5KAPBj9xW0ym1","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyvDAGJQSl7Dwm_OZl4AaABAg.APBgwht01PZAPWM5oIKXK2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
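The "look up by comment ID" step above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: it assumes only that the raw LLM response is a JSON array of records, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`), as in the sample. The function name and the inline example IDs are hypothetical.

```python
import json
from typing import Optional

def lookup_by_comment_id(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for one comment from a raw LLM response.

    `raw_response` is the model output as a string: a JSON array of
    objects, each with an "id" field and the coded dimensions.
    Returns None if the comment ID is not present in the response.
    """
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Two-record response in the same shape as the sample above
# (the IDs here are made up for illustration).
raw = '''[
  {"id": "ytr_aaa", "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_bbb", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

record = lookup_by_comment_id(raw, "ytr_aaa")
print(record["emotion"])  # -> outrage
```

A linear scan is enough for a single response of ten records; when matching many IDs against a large batch, building a `{record["id"]: record}` dict once and indexing into it is the more natural choice.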