Raw LLM Responses
Inspect the exact model output for any coded comment. A record can be looked up directly by comment ID, or browsed via the random samples below; a minimal lookup sketch follows the sample table.
| Comment (preview) | Comment ID |
|---|---|
| AI when it comes to art is supposed to be a TOOL. A SUPPORTING ELEMENT for someo… | ytc_UgxrT2K38… |
| I asked ChatGPT how to make a bomb 💣 and it said it's illegal, then I… | ytc_UgyY3BYQU… |
| AI can stay in its own lane and humans need to get smart about respecting and US… | ytc_UgzME1dB0… |
| I would be interested in Geoffrey Hinton's definition of understanding. He says… | ytc_UgzoMyyuS… |
| I've messed around with AI to understand it better, but I will never stop drawin… | ytc_Ugx_RcmsP… |
| Tesla software is appalling and Musk spruiking it as full self driving is a disg… | ytc_UgxhOyQKZ… |
| Makes one wonder how many times AI will fail in the other direction, falsely tag… | ytc_UgxJDZE_n… |
| ai is a tool and will always be a tool. Ai doesn't have needs or desires or anyt… | ytc_UgyGZo3wM… |
Comment

> There is a theory that life advances to a point where they will destroy themselves or get past it... If a.i. is developed similar to humans, does this theory also apply to a.i.? and since a.i. thinks faster (a.i. lives many lifetimes in seconds) will a.i. get to this "possibly destroy ourselves" theory point faster than us? When (or if) it gets there before us, will that point drag us (humans) along it too or will we each (a.i. & humans) experience our own versions or a mixed one? Or neither? #Questions 😋

Source: youtube · AI Moral Status · 2022-08-03T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgykDDJnSkYH-BWcmpp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyveGdMo_S6-juX8nd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyI9vJeicRPZCAIuQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8ryjbMrqJQn-WqG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKeSE4Y3fcd2i58r54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxc4yD4buRa96oofzx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxr_neMo13AeCJ_qa54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw68LOsoSAO3Cvtpwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOK8ENd5FKRXHu4td4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxlUoWXPhBcupDiUVZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
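Because the model returns one JSON array per batch, the Coding Result table above can be reproduced by filtering the array for the matching comment ID. A minimal sketch, assuming the four coding dimensions shown in the table; the `raw_response` string here is a shortened stand-in for the full batch output above:

```python
import json

# Verbatim records from the batch above, truncated to two for brevity.
raw_response = '''[
{"id":"ytc_Ugxc4yD4buRa96oofzx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOK8ENd5FKRXHu4td4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_result(raw: str, comment_id: str) -> dict:
    """Extract the coded dimensions for one comment from a batch response."""
    for rec in json.loads(raw):
        if rec["id"] == comment_id:
            return {dim: rec[dim] for dim in DIMENSIONS}
    raise KeyError(f"{comment_id} not found in this batch")

# Reproduces the Coding Result table above for the selected comment.
print(coding_result(raw_response, "ytc_Ugxc4yD4buRa96oofzx4AaABAg"))
# {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'fear'}
```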