Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.

Random samples
- Technological disruption is not the problem. Evolutionary succession is. AI is n… (ytr_UgwNl8zT-…)
- There is a new season of “Black Mirror” recently came out. The very first episod… (ytc_UgzboWgk6…)
- Ai as a tool is a common argument. The thing is: whenever you hold a brush, the … (ytc_UgwKM6onD…)
- @savnet_sinn We dont know the source of human conciousness (at least with empira… (ytr_Ugxtn1dKt…)
- There's nothing necessarily wrong with that, but as I stated in another reply; u… (ytr_UgwhOsaM8…)
- @RedQuill13 can you write a new play using Shakespearean styles and imagery? Be… (ytr_UgwSswaQ7…)
- ai artists should always state they are using ai and copyrights shouldn't be pro… (ytc_Ugz2yVqyS…)
- No AI is pretty stupid. It can't even write a novel without, having dozens of re… (ytc_Ugz2AcnBM…)
Comment

> i think people are focusing too much on self awareness. By morality experiments it's already more than proven that if AI gets a mission, like solving climate change, and it goes out of hand, and finds that the best way to complete is mission is by getting humans out of their way, once it's able to activate other AIs in secret to do this task, it's already too late. We can put a self destruct button on one AI, not hundreds of AI with far more complex thinking except even more dangerous than if they had awareness because they have no morals or reasoning.

youtube · AI Moral Status · 2023-08-21T13:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
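The four dimensions above (responsibility, reasoning, policy, emotion) make up the coding schema applied to every comment. Below is a minimal sketch of how the displayed values could be resolved from a parsed response, under the assumption (not confirmed by the tool) that a comment whose ID is missing from the parsed output falls back to "unclear" on every dimension; `resolve_coding` and `DIMENSIONS` are illustrative names, not part of the pipeline:

```python
# Illustrative sketch only: resolve the four coded dimensions for one comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def resolve_coding(codings: list[dict], comment_id: str) -> dict:
    """Return the coded values for comment_id, defaulting to 'unclear'."""
    for entry in codings:
        if entry.get("id") == comment_id:
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    # No entry for this ID (e.g. the model omitted it or parsing failed).
    return {dim: "unclear" for dim in DIMENSIONS}
```

Under that assumption, an all-"unclear" row like the one above is consistent with the comment's ID not being found in the parsed response.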
Raw LLM Response
[{"id":"ytc_Ugyw50kPMI4YgscOJ_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-nl07SxmJIyZI35t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh0PFTej62WD8lyw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgztYkyW4Uh0z_kLyNN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXT6e9HG6TfcPdEp54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOGLxhwuP0Ig9o8nl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5aN6AmK1JMv55cat4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyLSSHsXlgN8nsniAN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9gDx75wKssNpdpw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyhU3zbx1Vch_hq8rd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"})