Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples:

- "AI will make those who use it appear to be smart, but in reality, AI will make y…" (ytc_UgyHeTwKn…)
- "this girl is clueless, cars aren't going to disappear there is a near for parkin…" (ytc_UgyOm-ukf…)
- "Tell the shareholders that if they don't listen to you when you say that you are…" (ytc_UgxNAge-C…)
- "@kole1ful As I said earlier, the algorithms are color blind, and the technology …" (ytr_UgykEeDrd…)
- "0:40 This is really good point. We as humans naturally like people with skill, a…" (ytc_UgyGYw4Yg…)
- "Disabled wannabe artist/Amateur writer and game dev here. And despite strugglin…" (ytc_UgyGmfc_W…)
- "Someone who knows the documentation and knows how to use AI as a tool will be fa…" (rdc_kz04b3o)
- "We don't even know what WE are. Intelligence is what made AI, but what made huma…" (ytc_UgySEgM5D…)
Comment
> So, the progress ethic pushes suggests that should move forward with new technologies because that is how we change our norm and when considering it, we should not take into account the potential dangers involved? I can follow that for robots and AI with little objection .... but when I apply that to other fields where incredible progress has recently been made I come across several personal objections.
>
> Recently Crispr and gene drives, recent technologies that can allow us to rewrite the DNA and genomes of entire species, should be moved forward for progress' sake despite the very clear implication that that progress can get out of hand and destroy ecosysyems, wipe out entire species, and create diseases that can "racially cleanse"? It affects the status quo in the same way as AI and robots could. Though I think we can all agree that would be bad.
>
> At what point does the ethics of progress need to be countered/considered with a morality of progress?
Source: youtube · Posted: 2016-06-28T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugi0VpRQcJ-dlXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiNbJ86LvBRq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh1IEiVJXdTL3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UghLxJuzfSM6WngCoAEC","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggsqieRRiXa53gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugg4GbRqmgOic3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UghxMbj5aY2YSHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UghdOZ7MnsH1QngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjiENPkdW5qpHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UghmqNgxlJSGC3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
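The lookup described above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of flat coding records keyed by `id`, as in the batch shown above (the two records here are copied from that batch).

```python
import json

# Example raw LLM response: a JSON array with one coding record per
# comment, shaped like the batch output shown above.
raw_response = """
[
 {"id": "ytc_UghLxJuzfSM6WngCoAEC", "responsibility": "unclear",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
 {"id": "ytc_UggsqieRRiXa53gCoAEC", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup(comment_id: str, raw: str):
    """Parse a raw batch response and return the record for one comment ID,
    or None if the ID is not present in the batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup("ytc_UghLxJuzfSM6WngCoAEC", raw_response)
print(record["reasoning"])  # -> consequentialist
```

In practice a tool like this would likely index all batches into a single ID-to-record mapping up front rather than re-parsing per lookup, but the shape of the data is the same.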