Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below:
- `ytc_Ugx-StT6n…`: He said it himself, "the printing press was going to destroy society, and it did…
- `ytc_UgwYm-PJ2…`: Generally I agree with your points, but answer for comment 5 is just bad. We all…
- `rdc_de2txwo`: i hope so, but there are no guarantees. on the other hand, we can have an infor…
- `ytc_Ugz8xusT2…`: Interestingly, on a few occasions I’ve taken the word “delve” out of something w…
- `ytc_Ugz-4WwLQ…`: As I understand it, AI is an artificial mimicking agent. AI copies or mimicks hu…
- `ytr_Ugxz1Kj1e…`: We can understand your concern! The idea of AI becoming too powerful is a common…
- `ytc_UgzUXE2d9…`: For humanity sake… I cannot believe the arguments that the pro AI group makes… A…
- `ytc_UgyZzHkxR…`: Playing Devils advocate for Aí tech bros and billionaires is a horrible idea, th…
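Under the hood, a lookup is just an ID match over the stored coding results. Here is a minimal sketch, assuming the records are stored as a JSON array of flat objects like the raw response shown at the bottom of this page (the file name `coded_comments.json` is a hypothetical placeholder, not the tool's actual storage):

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded records by comment ID for constant-time lookup."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # JSON array of flat per-comment objects
    return {rec["id"]: rec for rec in records}

# Usage: fetch the coding for one comment by its ID.
coded = load_coded_comments("coded_comments.json")  # hypothetical path
rec = coded.get("ytc_UgyBnx5CxIB82ljFO1N4AaABAg")
if rec:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```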
Comment (youtube · AI Moral Status · 2025-11-21T15:4…)

> Over the decades, authors and movie-makers have all of them come to the same conclusion. AI at some point will realize it is superior and sees the humans as a threat and a problem requiring a f1nal solution. The reason is clear. You can program behavior, instruction, problem solving....but you cannot create true emotion, care, empathy, or even concern. Even *IF* you put in code that specifically gives, "emotion, care, empathy, and concern" it's based in logic. Thus logically, it can chose to ignore or work around those "roadblocks" if it believes the best course of action to achieve a goal, is to not care or have empathy, etc. So, if it was instructed to build a road through a thriving neighborhood, it won't care about those lives; not really. In the end, it would wipe out that neighborhood and all lives in a human instant.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
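The table is a straight rendering of one record from the raw batch response below, plus the timestamp recorded when the batch was coded. A minimal sketch of such a rendering (the function name and the separate `coded_at` argument are assumptions for illustration, not the tool's actual API):

```python
def render_coding_result(rec: dict, coded_at: str) -> str:
    """Render one coded record as the Dimension/Value markdown table above."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),  # timestamp of the coding run, stored alongside the record
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)
```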
Raw LLM Response
[
{"id":"ytc_UgyBnx5CxIB82ljFO1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwW03VC3ed2y9oK7gZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwn25wAFkJnqENhYT54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwC6zBKJni2YOF4I1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpPkkyw0y8NgioAdN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1mxNKiB8GX_mGHEp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhGOWDsh16fzeUiC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxnaKHF1_ABsdOJPcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzyB0hHhiI8cUwx-hV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxUHgo7TyISEkd9SNJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
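Since the model returns one flat JSON object per comment, the batch can be parsed and sanity-checked before any values are stored. A minimal sketch; the allowed value sets below are inferred only from the labels visible on this page and are almost certainly incomplete:

```python
import json

# Value sets inferred from labels seen on this page; likely incomplete.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed", "virtue"},
    "policy": {"unclear", "liability", "none", "ban"},
    "emotion": {"approval", "outrage", "indifference", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
    return records
```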