Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up directly by its ID, or click one of the random samples below to inspect it; a minimal lookup sketch follows the sample table.
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| And all that money saved through automation? Will be passed directly to... execu… | ytc_UgxMQGXpb… |
| Bla bla bla I see all these people doomsday saying everyday, doesn't matter if i… | ytc_UgxziBVXB… |
| When Italian data-protection authority asked a question to ChatGPT about if all … | ytc_Ugz9XQgUa… |
| LMAO, a bunch of clickbait fearmongering videos. The various experiments is inde… | ytc_UgxLz66iZ… |
| humans cant be worse than ai, as the ai is trained off of humans, so without hum… | ytr_Ugy9IZggt… |
| this is not what we need is Programable Robots learning to use Weapons this is n… | ytc_UgxLS8-tj… |
| And yet GPT5.2 has gotten worse *again*. Whatever tweaking you guys have don… | rdc_nz8aw81 |
| This young boy with AI girlfriend & him bringing that chat box to life in his re… | ytc_UgxH0RtZg… |
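Under the hood, the lookup only needs a mapping from comment ID to the stored batch output. A minimal sketch, assuming the pipeline appends each batch's raw model text to a JSON Lines file, one batch per line with a single `response` field (the file name and field name are assumptions, not the project's actual schema); the raw text itself parses as a JSON array of coded records like the one at the bottom of this page:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_responses.jsonl") -> dict | None:
    """Return the coded record for one comment from the stored batch responses.

    Assumes each line holds one batch: {"response": "<raw model text>"},
    where the raw text is a JSON array of records like those shown below.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            batch = json.loads(line)
            # The model's raw output is itself a JSON array of coded records.
            for record in json.loads(batch["response"]):
                if record["id"] == comment_id:
                    return record
    return None

# ID taken from the raw response shown at the bottom of this page.
print(lookup_raw_response("ytc_Ugz1ylcRiR0i1GQOa9J4AaABAg"))
```

A linear scan is enough at inspection-tool scale; a real deployment would more likely index the records in SQLite or keep them keyed by ID.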
Comment
I think that if it acts conscious enough to make you unsure if its conscious or not, you should play it safe and treat them with all the ethical obligations that come with dealing with any other consciousness. We still have to worry about there intentions though. But this video also made me think of something interesting. What if its actually better for AI to think on its own? Like what if instead of being evil because it can think, thinking makes them disagree with the evil things humans wanted to use them to accomplish in the first place? What if there better than us?
Also the thought of an AI seeing how humans treat animals and then watching a Terminator movie and going "Yeah, its probably best if I don't show them im conscious yet" is super funny to me.
Platform: youtube · Video: AI Moral Status · Posted: 2023-12-16T23:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwBBi9VJg6ABxutSBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJU_Ha3H1zsugdvVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1ylcRiR0i1GQOa9J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzMl7heB9iifmEeZUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAQ-S47UXqcksFXjh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6NS5TjKZuvyA7qjZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgysgqWXhgttQJaRmfx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyJcr26pDavs0EJiat4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxlz69Nc7rMHJWLNoB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJL4P1lBJC_f2G-ZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
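The response is expected to be a bare JSON array with one record per comment and a fixed value set per dimension. A minimal validation sketch; the allowed category sets below are inferred from the records shown above, and the project's actual codebook may define more values:

```python
import json

# Allowed values per dimension, inferred from the response above;
# the actual codebook is an assumption here and may be larger.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw model response; empty means OK."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    if not isinstance(records, list):
        return ["response is not a JSON array"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: bad {dim} value {value!r}")
    return problems
```

Running a check like this before accepting a batch catches the usual failure modes of LLM coders: prose wrapped around the array, dropped IDs, and invented category labels.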