Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by comment ID.
Random samples
- "come on, AIs are getting better and better at driving...I'd rather be in traffic…" (ytc_UgwZp6vqo…)
- "If air art was around when I started my art journey, I think it would’ve severel…" (ytc_UgwDJVN85…)
- "@Puddy236 if you think ai art has any soul or sentimental value your a soulless…" (ytr_Ugxw46AbV…)
- "Another time.. I seen Waymo left signal turned on but they don’t make turn ..an…" (ytc_UgxKuECTy…)
- "Intriguing perspective! AI is and will alter, innovate and transform our future.…" (ytc_Ugw57WnaP…)
- "AI will definitely take us towards wealth inequality and ultimately towards the …" (ytc_Ugx2WJc9h…)
- "it's so weird. now we're just teaching kids how to go directly into the service …" (ytc_Ugxd1x65h…)
- "Wtf do we need ai anyway? Someone blow the place up now! Save us all…" (ytc_UgwqMTaVB…)
Comment
This whole video is built on the premise that consciousness exist. Philosophy will never reach any form of consensus, but the growing trend is to believe that consciousness isn't a thing, that we are just fooling ourselves.
But whatever the answer to that maybe, this is actually the most meaningless question one can ask about AI. Beiing self conscious is something internal if anything, it doesn't have any impact on anything and that's why consciousness is not measurable.
You don't need to be conscious to be "a new specy", we don't even assume consciousness in most animal species out there.
it feels like "consciousness" is often confused with self-deciding goals in this video, which comes from desires. it all emerged in ourselves from the need to survive and reproduce, something we could potentially train our AI to do, but certainly don't want to. But if we gave an AI curiosity, rewarding it for unexpected outcomes, and for expanding its knowledge, it could certainly produce enough intermediate goals and emergent behavior for us to call conscious.
I call a neural network conscious when it runs in a persistant environement and continuously learns from it (not just during initial training), with self-awarness (introspection capabilities and keeping track of its own states), and yes, those already exist.
Source: youtube · Video: AI Moral Status · 2023-08-21T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugw9u4eyvrqOT4sgZQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxiXDKDKbnlHsZ-iF94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyv9xWt6CgZLRPI5Mp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyEHKOELB_n2rsdR694AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzmKWe4RHf3UpSbBJV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7hXRZf0Pz0H68rCR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugws97o1bOtnRxPPR9N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwP_kKxQew89OjLTpl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzw2UeKiZELfv5RbkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-w5gPVzf7LjLO2gt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"})
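A record like the one above can be looked up programmatically. The sketch below is a minimal, hypothetical illustration — it assumes the raw LLM response is a JSON array of per-comment codings with the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the shortened IDs and the helper name `parse_codings` are invented for the example. It also tolerates a stray `)` where `]` belongs, as in the raw response above, which would otherwise make the payload unparseable:

```python
import json

# Hypothetical raw LLM output: a JSON array of codings, with the same
# malformed closing ")" seen in the record above. IDs are shortened stand-ins.
RAW_RESPONSE = '''[{"id":"ytc_AAA","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_BBB","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"})'''

def parse_codings(raw: str) -> dict:
    """Parse a raw coding response into a dict keyed by comment ID,
    repairing a trailing ')' that sometimes appears instead of ']'."""
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    return {row["id"]: row for row in json.loads(text)}

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_BBB"]["policy"])  # → regulate
```

A failed repair (or any other malformed JSON) would raise `json.JSONDecodeError`, which may explain coding rows that fall back to "unclear" — though that link is a guess, not something the record confirms.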