Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "The more scared the people, the easier to establish global governance. The probl…" (ytr_UgyqBI7LD…)
- "Kids should learn how to fix the AI-generated code mess when critical systems st…" (ytc_Ugwz61GVp…)
- "One Gen Ai generated image takes about the same power as recharging your phone o…" (ytr_Ugw-S2D2g…)
- "They are wrong ai art isn't close. To the fountain. That would be programmable a…" (ytc_UgzCs2xAU…)
- "AJ, this is the most apt and notably vital video you have made. Perhaps you coul…" (ytc_Ugwia98Ut…)
- "This video does show evidence of bias in ChatGPT, assuming these are the respons…" (ytc_Ugy58IBo6…)
- "You make the classic mistake in all AI world with you analogy with self driving …" (ytc_UgyQDoIxc…)
- "This is what you do as a parent, you start telling your kids that some nudes isn…" (ytc_UgxmlLmK0…)
Comment
This is kinda funny, and I disagree with the sentiment of of ai thinking in ways we don't understand "unlike humans" when the very nature of these issues with ai is that it's acting human but we don't understand the Human behaviour, when you tell ai your going to remove it it does what's expected based on the training data. It saying it will kill someone to survive is believe it or not completely normal, it's again been trained on humans. Concepts like blackmailing to get what it wants. To a machine this looks like something humans have successfully been doing for 100s and 1000s of years, to get what they wanted, it's literally copying us and where like "why is it so evil?!" fuck'-en duuuuuh look in the mirror that's essentially what generative ai is anyway
youtube · AI Moral Status · 2025-12-16T09:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwnshwQ7aHs0DgDhMl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzz5MVjWj-8gJIy8hV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy0fUH7nX-47eW523N4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyfkEIbHcrIpyZHI9x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxwyO9H_9it8hGozAd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwnueM9xA3Rc0KrtLZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwFj-FmCw7WfttXkh14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxM5e25bs0z-04Y4Cp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugyrlm0rmKugab4czlV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyxRBNZIYvWdKozUKV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
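The "look up by comment ID" step over a raw response like the one above can be sketched as follows. This is a minimal illustration, not the page's actual implementation: the field names (`id`, `responsibility`, `policy`, etc.) match the JSON shown, but the `index_by_id` helper is hypothetical.

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment,
# in the same shape as the response shown above (one real row reproduced here).
raw_response = """
[
  {"id": "ytc_Ugyrlm0rmKugab4czlV4AaABAg",
   "responsibility": "distributed", "reasoning": "virtue",
   "policy": "industry_self", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
row = codes["ytc_Ugyrlm0rmKugab4czlV4AaABAg"]
print(row["responsibility"], row["policy"])  # distributed industry_self
```

Indexing by ID up front makes each subsequent lookup O(1), which matters when cross-referencing many coded comments against a large batch response.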