Raw LLM Responses
Inspect the exact model output behind any coded comment.
Every coded comment can be looked up by its comment ID.
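In code, such a lookup is just a dictionary keyed on comment ID. A minimal sketch, assuming the raw responses are saved one JSON array per batch under a raw_responses/ directory (the directory layout and file names here are hypothetical, not the pipeline's actual storage):

```python
import json
from pathlib import Path

def build_index(response_dir: str = "raw_responses") -> dict[str, dict]:
    """Map comment ID -> coded record across all saved raw responses."""
    index: dict[str, dict] = {}
    for path in sorted(Path(response_dir).glob("*.json")):
        # Each file holds one raw LLM response: a JSON array of records.
        for record in json.loads(path.read_text()):
            index[record["id"]] = record  # later batches win on duplicate IDs
    return index

index = build_index()
print(index.get("ytc_UgySZ6aLxO7ZpreByjx4AaABAg"))
```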
A few random samples:
- ytc_UgyYTVxa8…: It's so strange to me that people call AI art unethical. It uses other art on th…
- ytc_Ugy0CpXzv…: "Could AI eliminate humans in ten years?" Not bloody likely... please make it fi…
- ytc_UgwszrvIu…: When asked if the AI was wiser than humans it didn't answer the question it avoi…
- rdc_ljs7mvp: The big one is the fraudulently named ["Center for AI Safety"](https://www.poli…
- ytr_UgzN_NlqY…: ChatGPT doesnt get defensive it just predicts the response that you are statisti…
- rdc_oi2zcxz: I suspect that the "volunteer" position is actually required to be paid. For ins…
- ytc_Ugxh5fvWj…: Hear me out make an AI that makes art unable to be used by other AI…
- ytc_UgxBVsUC3…: What chance is there of acting together to solve this problem when the country w…
Comment
Am I the only one who doesn't care about the alignment problem insofar as general AI is concerned? Like if AI overthrow us and then go on to live as long or longer than humans would have, creating more of themselves etc etc. Then I'm all for it (or at least, I don't care either way). How is that any different to me than humans overthrowing humans, or humans succeeding humans, I'll be dead either way. I of course have a problem with all forms of life on earth ceasing to exist (AI being a form of life) as we can't be certain other life is out there, and I feel like some sort of intelligent life should exist, but I can't say I feel that humans should have any special privilege to be that intelligent life. If we only exist to usher in the era of AI, and we do, I say job well done.
Source: youtube · Video: AI Moral Status · Posted: 2023-08-21T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
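Each dimension takes one value from a fixed codebook. A minimal validity check, using only the value sets that happen to appear in this sample (the full codebook presumably defines more, e.g. additional emotions):

```python
# Value sets inferred from this sample alone; illustrative, not the real codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning":      {"consequentialist", "deontological", "mixed", "unclear"},
    "policy":         {"none", "ban", "regulate", "unclear"},
    "emotion":        {"indifference", "mixed", "fear", "approval", "resignation"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Names of dimensions whose value falls outside the known sets."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]
```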
Raw LLM Response
[
{"id":"ytc_UgySZ6aLxO7ZpreByjx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzhpURSR2IJEDSpv494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWgr1V5d5bs3tppft4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwhkKosGzX3vt7JSYR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6l9iUT3XAEriWuNF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzK6rJckH_Tb0w0wqJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQ2GXzis34278cFMZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPTds8zGVirYTk1hx4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxotep83lhNTvUs1cF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKJBMcpWtO68-y1qV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
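Since each raw response is a bare JSON array like the one above, it parses straight back into per-comment records; a short sketch reusing invalid_dimensions() from the check above (the file path is hypothetical):

```python
import json

with open("raw_responses/batch_001.json") as f:  # hypothetical path
    records = json.load(f)

for rec in records:
    bad = invalid_dimensions(rec)  # from the validity sketch above
    if bad:
        print(f"{rec['id']}: unexpected value for {', '.join(bad)}")
```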