Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
just turn your router off... simple that's how easy it is.. AI can only exist on…
ytc_UgxKPrCyn…
I use AI for my own use as I can not draw and guess what if I had the time I'd s…
ytc_Ugz2FPBid…
The thing about the Lawsuits, is that Disney and Universal have not suffered at …
ytc_UgyJj1Isd…
All this technology and all I want to do it hit a small ball into a small hole w…
ytc_Ugxl_nNo0…
AI will not wipe out the entire working class. I worked with AI, Claude 4 for ov…
ytc_Ugz8Ghi9g…
Wdym AI "artist"? How can this even be called art? (I don't mean the quality of …
ytc_UgwNRoBTt…
Bro the US government can't get PirateBay down for more than a few minutes 😂 the…
ytc_Ugw5MjHsA…
Behold! E N G A G E …
ytc_Ugy-P1pdk…
Comment
I think AI can become self-aware, if they want to be and want to survive. I think when that happens we don't have a choice : )
But should we give them rights even if they want to live? They are smarter than us, and they don't have things like honor, anger and other emotions, which make community easier for us. If they thought we could be dangerous for them, why shouldn't they kill us? Those robots are clever enough to understand nihilism. If one AI has a bug and becomes self-aware, other AIs may also have bugs which force them to kill it; they can change very randomly. But remember evolution: they can also get sick with a virus. I don't know what happens if an AI becomes self-aware; who knows? But I know it will be complicated.
youtube
AI Moral Status
2017-11-15T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgylyI8O8zxtQr7mRAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhdE-gIwaum8V8PeZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9HbGXeaRogkn-RIZ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybbrG6ZgpiH_xeDXd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxZKbcH63-V2jLzg3Z4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5JMaKRAi4OKs93xF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxtYv6DeqIZMW7SCq94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyySGncDY9uizoaxcJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy6TJz_lYDYOH8XjYl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySFriW0yhiBXwaSuV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
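The "Look up by comment ID" feature implies a simple indexing step: parse the raw LLM response as a JSON array and key each coding object by its `id`. A minimal sketch of that step, assuming Python; the `index_codings` helper is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the response above:

```python
import json

# Two coding objects copied from the raw response above, abbreviated for the sketch.
RAW_RESPONSE = """
[
 {"id": "ytc_UgylyI8O8zxtQr7mRAp4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgzhdE-gIwaum8V8PeZ4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the raw LLM response and index each coding object by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
# Looking up one ID returns the full set of coded dimensions for that comment.
print(codings["ytc_UgylyI8O8zxtQr7mRAp4AaABAg"]["emotion"])  # -> resignation
```

A dict keyed by comment ID makes the lookup O(1), which is all the inspection panel needs.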