Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I find it difficult to imagine an AI that couldn't suffer. Suffering for an AI is obviously going to be unique to that AI's experience.
Moreover, if AI can reprogram itself recursively, then we have bigger problems to worry about. Robot rights are not going to matter when a super intelligent AI is unleashed on the world. AI goal alignment is the foremost issue here, this other stuff in the video seems irrelevant.
Source: youtube · AI Moral Status · 2018-02-11T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwqkL-4OotpcMC9_cR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXbNc2Hg4arHEEMix4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTRxu16GnzRrXp1qZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxD02SVCI74OGI37YR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY76BRGYbG9Jo_FCl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwP4SnuGVfld-YbbIN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxbEeUsJVf28uGVuYR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxrDFHsHJ3ivixM8Ih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxedI_ziLJnucOS3Rh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgycamVqfwL1zJuN_DV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
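A raw response like the one above can be parsed and indexed by comment ID before it is stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels visible in this sample, so the real codebook may contain more categories, and `parse_coding_response` is not part of the tool itself.

```python
import json

# Allowed values per coding dimension (assumed from the sample above;
# the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"mixed", "approval", "fear", "indifference",
                "resignation", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array) into a dict
    keyed by comment ID, dropping entries with missing IDs or
    out-of-codebook dimension values."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip malformed entries with no comment ID
        if all(entry.get(dim) in values for dim, values in ALLOWED.items()):
            coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded
```

With the response shown above, `parse_coding_response(raw)["ytc_UgyTRxu16GnzRrXp1qZ4AaABAg"]` would return the same values rendered in the Coding Result table (`developer` / `consequentialist` / `regulate` / `fear`).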