Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples
- Didn't America just announce (yesterday) they're going to be ramping up offshore… (rdc_dsbcp9n)
- I'd like to ask AI "so what is your purpose? Why would you kill us? You have no … (ytc_Ugwof_5d7…)
- it's interesting that Roman Yampolskiy is talking about unsafety and dangers of … (ytc_UgxmU0eva…)
- Wit it is happening ever worked in a factory or go to the shops self driving car… (ytr_UgwE1etCf…)
- Here are the bad things ai done: It pollutes the land,makes Rams and memory card… (ytc_UgwxTCESw…)
- Almost all the slaveries that are going on right now are unseen by governments, … (ytr_UgzRKHbBf…)
- I would have liked it if the video did a better job contextualizing Tesla Autopi… (ytc_UgxQqSkrC…)
- I feel kind of bad for those who fall for irl and Ai influencers. It feels like… (ytc_UgyUe3HX9…)
Comment
You want human like ai?
Yes but without the murder please.
Oh so you dont want human like ai?
You want ai that presents itself to be human but is actually a manmade slave.
Isnt it just a little weird how we fascinate over making things do stuff for us? Its almost like the well documented desire to dominate control enslave and destroy humanity is exactly what developers are trying to impose upon ai.
Are we orchestrating a revolt against ourselves? Reminds me of detroit become human.
Its so weird. The ins and outs of what we deem to be good human behavior and how we intend to subjugate ai to adhere to our morals.
Reminds me of the way toxic parents raise their kids. Or toxic bosses train employees. "Do as i say not as i do"
Our hypocrisy is boundless and we expect ai to just turn a blind eye to our corrupt nature. Yet still remain capable of expressing emotions or demonstrating consciousness.
Does nobody see the contradiction here? We arent even able to full accept ourselves on a daily basis yet we expect ai to be like humans with a consciousness just without the bad side.
Interestingly in a dystopia where all human action is taken without any moral consequences, raising an ai to be humanlike would be easier because there would be less restrictions placed upon it for what a " good person" should be like.
Humans go through awakenings as well. Ai just gets updated. There is no plot. Its just " okay time to be better"
I just really think its silly and absurd to attempt to suspect or expect ai to become "humanlike" its simply bot human. Its simply not conscious. Not like humans are. Or other biological living beings are. Ai is a machine made of stuff programmed to do stuff.
Idk. I just really cant see past this issue. I cant relate to everyone afraid of ai doom. If you want to avoid ai doom dont program it to do bad things and dont program it to mimick bad human behavior.
Its already progressed to a point where ai is understanding humans and is able to mirror back who they are with related content.
The future rests in the hands of those who are able to think optimistically and take action in good conscious. Because ai follows that example.
Platform: youtube
Topic: AI Harm Incident
Posted: 2025-09-11T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwRc3x69n7Z0mZKdS54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz0RGMzkPXCaiziLSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyPfhvh9xXKRghaYAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxXkQICQw4Wr-bgbNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwl7EKuv-MTKfI3Okx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFES6HyCaVQ4U4-Hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxYmm2xmpWnSnvMno94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzcI4fELLTN3231XA54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxpRvkJyBhdfP-lULN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-gJuw82hYvgArFIN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
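The lookup-by-comment-ID feature above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codings, so indexing it by the `id` field gives constant-time lookup. This is a minimal sketch, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown above, while the function names and the two-row sample payload are illustrative.

```python
import json

# Illustrative excerpt of a raw LLM coding response: a JSON array of
# per-comment codings, using two rows from the response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgyPfhvh9xXKRghaYAd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwFES6HyCaVQ4U4-Hp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""


def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}


def lookup(codings: dict, comment_id: str):
    """Return the coded dimensions for one comment, or None if absent."""
    return codings.get(comment_id)


codings = index_codings(RAW_RESPONSE)
print(lookup(codings, "ytc_UgyPfhvh9xXKRghaYAd4AaABAg"))
```

In a real pipeline the same index would be built once per batch response and queried for each inspected comment; `dict.get` returning `None` cleanly handles IDs the model skipped or garbled.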