Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples:
- "This guy is woke and virtue signaling. This is all we need, convince AI that the…" (`ytc_Ugzwks3cy…`)
- "I mean.... i just would of reached up there and hit the power button.... but tha…" (`ytc_UgxH-TwRW…`)
- "@freble_clef honstely i wouldnt use for medical stuff since if it gets stuff wro…" (`ytr_Ugxfld5t-…`)
- "None of the artists that AI gets it's inspiration from have consented. No artist…" (`ytc_UgwFyDqU8…`)
- "I love your videos and I loved working with you every time I have, however are y…" (`ytc_UgxamjaqQ…`)
- "Scary imagine they come self aware and the ptsd they could get from seeing the f…" (`ytc_Ugzb6PuIR…`)
- "Human artist fallacy is thinking currenr ai art quality will be the same in the …" (`ytc_UgwUiCVTC…`)
- "But you are still using it to order drinks , hiring it , employing it , you are …" (`ytc_UgwvyykvV…`)
Comment
Thus far, we have been completely unable to ensure that humans are acting based on what is best for humans. And even with the best intentions, we've created some of our worst pollutants and mutated our children with drugs that were supposed to help people, and caused a lot of cancer, and etc... And even if we could control how people think and act, wouldn't that be immoral? I'm not trying to say, "let people do whatever they want," or "Let future AI do whatever they want," I'm just saying that, at this particular moment in time, it doesn't seem possible to me that we will ever be able to control... anything really, but especially AI.
We may reach a point of safety with AI not unlike our weird moment in nuclear history where we are safe(ish) BECAUSE if one missile launches, that's it for everyone. Maybe they become an exitential threat to eachother. Maybe they will regulate one another. Maybe. What do I know? I'm a college dropout.
Source: youtube · Video: AI Moral Status · Timestamp: 2023-08-21T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
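The coding result above follows a fixed four-dimension schema. As a minimal sketch, here is one way that record could be represented and validated in Python; the value sets below are only those observed in the raw responses on this page (the actual codebook may allow more labels), and `CodedComment` is a hypothetical name, not part of the pipeline shown here.

```python
from dataclasses import dataclass

# Label sets observed in the raw LLM responses on this page.
# Assumption: the real codebook may define additional labels.
RESPONSIBILITY = {"ai_itself", "company", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"approval", "fear", "outrage", "resignation",
           "indifference", "mixed"}


@dataclass
class CodedComment:
    """One coded comment, mirroring the JSON keys in the raw response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # True only if every dimension uses a known label.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```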
Raw LLM Response
```json
[
  {"id":"ytc_Ugyk60AkoNrsafE7PkF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyD7TB9IezrJLMfhwd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxppipJBtZVx5L0HAd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwtoSTCvYehSflQk1R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxSlKHSmvlMIQLKmUl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8fuDlfM8JtxS7aQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_5qyWVqWCh64-tPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQ5GCvHQzEecPmbFN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwkKV6Mm2KX3f3Zsst4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwkkaIreG9nzBAc5BR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
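Since the model codes a whole batch as one JSON array, looking a comment up by ID reduces to parsing the array and indexing it by the `id` field. A minimal sketch, assuming the raw response text is valid JSON; `find_coding` is a hypothetical helper, not the page's actual lookup code.

```python
import json


def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse one raw LLM batch response and return the coding for a
    single comment ID, or None if the ID is not in the batch."""
    batch = json.loads(raw_response)           # list of per-comment dicts
    by_id = {row["id"]: row for row in batch}  # index the batch by comment ID
    return by_id.get(comment_id)


# Example against the batch above (ID copied from the raw response):
# find_coding(raw, "ytc_UgwkkaIreG9nzBAc5BR4AaABAg")
# -> {"id": "ytc_UgwkkaIreG9nzBAc5BR4AaABAg", "responsibility": "distributed",
#     "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
```

Note that the last entry in the batch matches the Coding Result table above, which is how a single comment's row is recovered from the batch response.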