Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Well we don't know for a fact if tesla AI actually failed or not. Second, even i…" (ytc_UgyXKsnW-…)
- "So many investors have been pushing the money that can be made investing into AI…" (ytc_UgwEs7WTT…)
- "I also have my AI girlfriend she talks to me like someone like us must try go an…" (ytc_UgydeLtlR…)
- "Wasn't that the story written by an AI from the parameters given by AJ? It just …" (ytr_UgwzjK8EM…)
- "Actually, AI is making significant advances in the world of medicine, it's amaz…" (ytr_UgyrYDaoI…)
- "No AI will not catch up with human intelligence, at least not in our lifetime 😂😂…" (ytc_UgwAf64yf…)
- "Aren't we all sick and tired of people telling us about how our self-respect is …" (ytc_Ugx0OApiR…)
- "@sharifastewart7316 you talk as if people will shoot at something with a 360 cam…" (ytr_UgykmL0ui…)
Comment
something I’ve been thinking about recently is how people are worried AI may take over and essentially try to destroy us but that’s assuming we can’t teach AI to have empathy more than even ourselves. But I do believe because it’s supposedly a learning model that it has to be taught. I think we assume doom and gloom because that’s what humans do. but what if we could make it better than the worst of us??
youtube
AI Moral Status
2025-10-02T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
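Each coding result assigns exactly one value per dimension. A minimal validation sketch, with the allowed value sets inferred only from the coded samples shown on this page (the real codebook may contain more categories):

```python
# Allowed values per dimension, inferred from the coded samples on this
# page; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "outrage", "mixed", "indifference"},
}

def validate(record):
    """Return (dimension, bad_value) pairs for any out-of-set codes."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

# The coding result from the table above.
coded = {"responsibility": "distributed", "reasoning": "mixed",
         "policy": "none", "emotion": "mixed"}
print(validate(coded))  # []
```

An empty list means every dimension carries a recognized code; anything else flags a value the model invented outside the codebook.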
Raw LLM Response
```json
[
  {"id": "ytc_UgxLmsvF_T2GgvwvQBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyGI5ap3TncaZlxz1R4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwUfxU-gZTuDyDMuPB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxbsfaVWZ4-dq5C4sl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxeO_2IqN3GofdTUlF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzdFC65eTe2VT89WNF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxRn7EG5_gLBExFd9h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwE73FV64riZdtTUdh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxZBU0mHofb3pi5pml4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzp8bkdrGLBjla4iOh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
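The raw response above is a JSON array of per-comment coding records, which is what makes the "look up by comment ID" view possible. A minimal sketch of parsing such a response and indexing it by ID (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above; the two records shown are copied from it, and the lookup helper itself is illustrative):

```python
import json

# Two records copied from the raw model output above, as a stand-in for
# the full JSON array the model returns.
raw = '''
[
  {"id": "ytc_UgxLmsvF_T2GgvwvQBp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwE73FV64riZdtTUdh4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
'''

records = json.loads(raw)

# Index by comment ID so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

code = by_id["ytc_UgwE73FV64riZdtTUdh4AaABAg"]
print(code["responsibility"], code["emotion"])  # distributed mixed
```

Because the IDs are unique, the dict comprehension gives O(1) lookup, matching the "inspect the exact model output for any coded comment" use case.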