Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID; a minimal programmatic sketch of that lookup follows the raw response at the bottom of this page.
Random samples
- "The US lately exports toxic amounts of insecurity and instability more than any …" (ytc_Ugywb_FRW…)
- "Racism is for the poor… the real winners here will be the tech bros behind this …" (ytc_UgzmcWq8T…)
- "I am a A&P student. Went through and made Chatgpt answer questions. It was wron…" (ytc_UgwUdhl5Z…)
- "Wow… I think most women are just like this… you pull back the skin and they aren…" (ytc_UgycZoXUv…)
- "Yeah it seems weird to be sad after you prompt the AI to make it…" (ytr_UgyXykbkV…)
- "I like Elon Musk. He's the most efficient or the best robot ever fabricated. Wat…" (ytc_UgxTlzCF8…)
- "So much of this looks very fake. I think AI is one of their latest psyops on hu…" (ytc_UgwLgUt6d…)
- "Of course they will be replaced. All executers will be replaced. Some might stay…" (ytc_UgyeFRIVD…)
Comment
Can a person be an AI researcher—working on developing the technology that could end our species, that many people 'in the know' believe has an extraordinarily good chance of doing so—and also be a good and morally sound person? The two seem mutually exclusive to me. Doesn't _knowingly_ working toward what you believe to be the end of our species necessarily make a person amoral?
What do you mean AI researchers are horrified of what they're creating? Then WHY ARE THEY DOING IT? If it's so bad, then stop. Quit your job. Or better yet, don't quit, but sabotage the AI from inside. Prevent it from coming to fruition. Create a secret coalition of concerned AI researchers who work together to ensure AI fails. Yes, it will cause an enormous economic recession if AI fails, but we have a much better chance of surviving an economic recession...
Platform: youtube · Video: AI Moral Status · Timestamp: 2025-12-14T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwCyFql-xTJYqR4N0x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxiFMK7f0OIHwYvTKN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2plT7wtMXnZ0BBOp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4xmfE4FvE8KcTwXt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxwUiAvnadcTQ5eZxt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzZaYmcBNC4A63CoX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzdxFj-jqQJYz3C8XJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFJfHdQdFCFf3uVrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwF19DDDUJTptvvLHd4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxRVMB37V5eNQbMelF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```