Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UgwOT60pt… — "Some of the biggest risks with AI are: consumer privacy, data privacy, b…"
- ytr_UgzrvY0uG… — "@pubertdefrog someone needs to get the power to the robots ha ha, one job you …"
- ytr_Ugw9eM6Pz… — "Yeah except humans literally have COLLEGES and YEAR LONG COURSES for that shit b…"
- ytr_UgzCSbUhN… — "@mcvenne8935 I don't need to. ChatGPT doesn't think, it doesn't believe anything…"
- ytc_Ugz70V3Gy… — "See, the argument everyone uses, thst AI is using copyrighted material, is a stu…"
- ytc_UgwCMuFHx… — "This means that work for an industry no longer has value, but not that …" (translated from French)
- ytc_Ugy5c0m_U… — "The nature of what we call "work" is about to change rapidly because of Ai. It w…"
- ytc_UgwfDX7lX… — "I feel like as AI takes over in so many other ways actual human connection becom…"
Comment
Robots are used now in operational procedures, we all know that. But what we seem to be purposely ignored is that, given AI can or WILL move into the realm of behavioral decision making, robots will eventually be able to decide whether to save or harm further the patient. Did you know robots have already demonstrated elements of racial profiling? A form of simple classification no doubt, but with the inclusion of social human behavior can eventually morph into a thoughts & reactions driven by belief. These engineers or scientists must admit they're walking a very fine line with AI and should realize they are going to be held fully accountable for any wrongdoings or abuses of this technology.
Source: youtube · AI Moral Status · 2022-02-15T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZj6Uqmxw3CA6qgep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0lxmhfy2-rvdwFPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJdQ4HoukPdS7QpCF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwAByz-4bKJ1Ra9pkt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxAIK_gIe8dAtjM4AJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJjpOOFSkxGUJDUHR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwMYQX1T9UHHURNkOV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzh_AV14fkvqqNPHkJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCanf2rgx0TYMtZ2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyc6LAuG16QKVieiSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
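Each raw response is a JSON array with one object per coded comment, keyed by the comment ID. A minimal sketch of how such a response can be parsed and indexed for lookup by comment ID (the variable names are illustrative, not part of the tool; the sample record is the second entry from the response above):

```python
import json

# A raw LLM coding response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_Ugx0lxmhfy2-rvdwFPR4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID so a single comment can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_Ugx0lxmhfy2-rvdwFPR4AaABAg"]
print(record["policy"])   # regulate
print(record["emotion"])  # fear
```

This is the same lookup the "Coding Result" table reflects: the four dimensions of one record, retrieved from the raw array by its comment ID.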