Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
- "For me no. But chatgpt can help me do my job. And I also use Undetectable AI as …" (`ytc_Ugwvutq5E…`)
- "Giving the AI a role and context is indeed the best thing you can do. But yeah, …" (`ytc_UgwhA0qEH…`)
- "AI looked at my chest x-ray and told my health check doctor I had tuberculosis. …" (`ytc_UgyAmqBUb…`)
- "AI is not overrated, it is the new narrator of our reality. The algorithm is the…" (`ytc_UgzxWD4rA…`)
- "Thing is for me I’m not much of an artist but I still hate what AI art means art…" (`ytc_UgwZwAt7x…`)
- "these AI-Bros have far more in common with these Art-Snobs, who glaze \"contempor…" (`ytc_Ugx5wQ7N5…`)
- "Thanks for this video!!! I’ve been researching a bunch of videos for a short fil…" (`ytc_UgyHiPMZe…`)
- "I think junior programmers will start doing harder tasks with the help of AI, an…" (`ytr_UgyN_MTFT…`)
Comment
I understood everything he was saying up until he talks about getting permission to experiment on the ai from the ai. This didn’t seem to go along with what he was saying previously, and brought up way more questions for me. I don’t want humanity’s ability to be safe, happy, kind etc to be compromised. Why let the robot have enough power to be able to overthrow all forms of recognizable decency? Discussing and preventing that seems to me to be one of the bigger issues. Is he saying that humans should perhaps “let” a sentient ai/ a self aware being become a fellow decision maker or an equal one? Wouldn’t that mean that we would be asking ai for permission to let it become more powerful? “hey ai, should we let you become more powerful?” I want to consider any form of life’s experience, be it self aware, or with feeling, it effects us all. However, I do not think it a good idea to give something that could hurt me more power than me.
Source: youtube · Video: AI Moral Status · Posted: 2023-01-14T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwUSecP5c_EzHZsT1V4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwfiB7InMtCa2CMNgV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwVEMU8VorhbU5w3mt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzGV9EdsMXNmQBaOzB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwDCPLHM6iI3YUp1JV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
```
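Because the model returns one JSON array per batch, looking up a single comment's coding means parsing the array and keying it by `id`. A minimal sketch in Python, assuming the raw response is a JSON array of records like the one shown above (the function name `index_by_comment_id` is illustrative, not part of the actual tool):

```python
import json

# One record from the raw batch response above; a real response
# contains one such object per coded comment.
raw_response = '''[
  {"id": "ytc_UgzGV9EdsMXNmQBaOzB4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_comment_id(raw_response)
record = codes["ytc_UgzGV9EdsMXNmQBaOzB4AaABAg"]
print(record["emotion"])  # fear
```

With the index in hand, the four coded dimensions for any comment (responsibility, reasoning, policy, emotion) are a single dictionary lookup away, which is what the "Look up by comment ID" view does conceptually.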