Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Tech Companies are run by non-technical Wall Street types who don't even use AI.…" (`ytc_UgwcJiVcG…`)
- "Terminator-like robots are not as much a concern yet because self balancing as w…" (`ytc_UgxVcGuat…`)
- "Humans and robots connot leave together this a warning ⚠️ robots are robots ther…" (`ytc_UgxuLeI11…`)
- "You're doing it entirely wrong though... all the AI is doing is maximising it's …" (`ytc_Ugz9HPoDN…`)
- "I hope in the near future AI has taken all the "menial" and relatives nessecary …" (`ytc_Ugw7ypmuy…`)
- "There's someone sitting in each self driving car. They're being a dick to a real…" (`rdc_eczil2q`)
- "I appreciate your perspective! The dialogue in the video highlights an interesti…" (`ytr_UgzgTfuc9…`)
- "Who buys the products that the ai agents are producing when no one has a job?…" (`ytc_Ugxv-e7Co…`)
Comment
Surely consciousness isn’t a binary property. (Nevermind the whole sapience/sentience thing, any definition will do.) It’s easy to imagine a being that is only dimly aware of itself. Hell, just think about what it’s “like” to be in a coma. Or drunk. Or anesthetized. Or just the right amount of in between. How much consciousness is enough consciousness before morality kicks in? Do we just ban anything anywhere close to the fuzzy edges so we never have to wonder if we’ve accidentally done a slavery to our adorable little vacuum cleaners?
Is there a sliding scale of AI rights where the closer to conscious you are the less humanity is allowed to exploit you? If so, shouldn’t we apply that same moral calculation to humans with less consciousness than others? Surely it’s not ok to genetically engineer a human just dumb enough to not have whatever rights you would find inconvenient. Is trying to limit the intelligence of an AI or to shackle one with the morals we developed to avoid going extinct really ok? If we somehow manage to solve the alignment problem, would it have been the most evil thing any human had ever done?
Source: youtube · Video: AI Moral Status · Posted: 2023-08-20T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
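Each coding assigns one categorical value per dimension. As a minimal sketch, the value sets below are inferred only from the raw response shown on this page (the actual codebook may define more categories), and `validate_coding` is a hypothetical helper:

```python
# Categorical values observed in this page's sample response;
# the real codebook may allow additional values (assumption).
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "distributed", "media"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"mixed", "approval", "fear", "indifference",
                "outrage", "resignation"},
}

def validate_coding(entry):
    """Check that every coded dimension holds an expected value."""
    return all(entry.get(dim) in vals for dim, vals in OBSERVED_VALUES.items())

# The coding shown in the table above passes this check.
example = {"responsibility": "none", "reasoning": "consequentialist",
           "policy": "ban", "emotion": "mixed"}
print(validate_coding(example))  # → True
```

A check like this is useful because LLM output is free text: a response that drifts outside the expected categories can be flagged for re-coding rather than silently stored.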
Raw LLM Response
```json
[
  {"id":"ytc_UgwEXO1zKSfz7Q49yV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
  {"id":"ytc_UgznVTn57qj85vYIIVZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5vMQkT2dyw40E9NB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzAcb78K6_P7ShX6DJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaRIuWz0x38UMeiJx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzQHMDYNZpCFLZ2_gx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyUPwiEpyboy_sO0xJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGGI5U-7KOAejzc4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPC5HLxsOmR1jcOYJ4AaABAg","responsibility":"media","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz0HuYIrXnkUj5tFbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
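Because the model codes a whole batch of comments in one JSON array, looking up a single comment's coding means parsing the array and matching on `id`. A minimal sketch, with `lookup_coding` a hypothetical helper and the entries abbreviated from the response above:

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (structure taken from the sample above, two entries shown).
raw_response = """[
  {"id": "ytc_UgwEXO1zKSfz7Q49yV54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "mixed"},
  {"id": "ytc_UgyUPwiEpyboy_sO0xJ4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]"""

def lookup_coding(raw, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            # Drop the ID itself; keep only the four coded dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

coding = lookup_coding(raw_response, "ytc_UgwEXO1zKSfz7Q49yV54AaABAg")
print(coding)
# → {'responsibility': 'none', 'reasoning': 'consequentialist',
#    'policy': 'ban', 'emotion': 'mixed'}
```

If `json.loads` raises, or the requested ID is missing from the array, the batch can be queued for re-coding.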