Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
ytc_UgzCs2xAU… — They are wrong ai art isn't close. To the fountain. That would be programmable a…
rdc_nvsfcjk — Sorry but in what world can school kids just generate images like that of a lite…
ytc_UgyJNghv7… — Great talk BUT what is not true is that creative people hate AI. I know quite a …
ytc_Ugy3B4Kgs… — Robots in year 3000 will be dangerous they will try to destroy all humanity but …
ytr_UgzvpEMLj… — AI technology can serve as a helpful tool to save time and enhance productivity …
ytc_UgyENKaRq… — This comment will disappear into the void but hopefully whoever needs to read it…
ytc_UgwLXW6su… — I been doing the same thing, especially with the speech 😂 but then it dawns on m…
ytc_Ugzzp4FQR… — Guys do you remember when speech to chat first rolled out? Your grandpa talk-scr…
Comment
@lomiification You have a misunderstanding of what alignment means in this context. An AI that makes deepfakes is not aligned or misaligned in any meaningful sense, it's very simple compared to the AI models that people are concerned might be misaligned. An AI that makes fake videos of people committing crimes without the user asking isn't better than an AI that will do it only when asked, it's just less useful. Whether or not an AI can do what you ask is about its capabilities, which is a separate thing. Alignment instead refers to whether an AI will try to do what humans want it to, almost always in the context of general AI models like LLMs. Unfortunately, nearly all of the money and research in the field is going towards capabilities research, making AI smarter and better at doing things. There's a lot of research on making AI better at biology for example, but not a lot on ensuring that when we ask for a cure for cancer it doesn't give us a deadly virus instead.
Platform: youtube · Video: AI Moral Status · Posted: 2025-10-31T04:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwf2zF_xWkRggRi-X94AaABAg.AOw2A9e-onHAOxT_sXTEu7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxZm2WJibEPTyCvE1x4AaABAg.AOvxD8jYdhlAOw3q8hoOdJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy2k2xFGP9gDYywYKh4AaABAg.AOvs7JHmu_wAOvvaaNQJ-N","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz6Pt_A9K6iBockeqF4AaABAg.AOvrDnD-1uCARjwhVb1Ij_","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz-_lMNf5m98fTgUux4AaABAg.AOvn8jkMIP7AOvrjV8GlR2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxNSpsc9xXpxxv-FSF4AaABAg.AOvmjoUJUYxAOvtcKqB6gx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxaZLBfKqrXIvI_dMt4AaABAg.AOvlsJO3MsaAOvptoJvbBS","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxaZLBfKqrXIvI_dMt4AaABAg.AOvlsJO3MsaAOvv0cOJ1B6","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxdGAOj0gQkipSK2Ml4AaABAg.AOvlcnF6hFlAOw-xfix-wo","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugzhzx6dO_u1tTU8ZIp4AaABAg.AOvlLnMfxZzAOvrfXGCOu6","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
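A response like the one above is a JSON array of coding records, each carrying an `id` plus the four dimensions shown in the result table. Below is a minimal sketch of how such a batch could be parsed and filtered for well-formed records. The allowed-value sets are inferred from the records displayed here and are assumptions, not a documented schema; `parse_coding_response` is a hypothetical helper, not part of any tool shown above.

```python
import json

# Assumed value sets, inferred from the sample records above —
# the real coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch-coding response, keeping only records
    that have an id and a recognized value for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip rather than fail the batch
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"unclear",'
       '"emotion":"indifference"}]')
print(len(parse_coding_response(raw)))  # 1 valid record
```

Skipping malformed records instead of raising keeps one bad line from discarding an entire batch, which matters when model output occasionally drifts from the requested format.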