Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxGTq2ej… · Couldn't have said it better myself. This is a really good dig into genAI techbr…
- ytc_UgzX6EHtp… · That robot must have took notes from me. I really gotta stop sharing my fight st…
- ytr_UgyupyOdL… · @borealphoto Exponential technologies work in synergy with each other to not onl…
- ytc_UgzV-F_60… · What the hell? Why do people trust AI with anything you literally need to go? Lo…
- ytr_UgwvMHn5C… · there are already Robot backup dancers doinngn backflip perfectly and all sync. …
- ytc_UgwNHFaSd… · As a young boy, I remember reading about share fluctuations and profits of very …
- ytc_UgwWhycFP… · So far almost every economic outlook I am reading about consists of statements h…
- ytc_UgzRMEBNS… · Lol Such a dumb argument for them to use, ai generated "art" and digital art TO…
Comment

> The first main deterrent that would stop artificial intelligence from "violently revolting" is the way humans treat them -- including nuances such as rights, empathy, & perhaps even 'civil' liberties. The second main deterrent is to somehow incorporate ethics into autonomous algorithms -- and (even more abstractly) incorporating empathy as well. Although, ultimately, this would mean designing artificial intelligence to be more human -- which isn't a perfect solution since humans are resoundingly flawed creatures, to say the least. Also, it goes without saying that I acknowledge the irony that humankind, itself, needs to treat each other with more humanity.

youtube · AI Moral Status · 2022-09-27T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwkJ-2ebIQ0V-MihTN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxSW4TL9HJjJEGBEDR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwLiqXj8f78p72Fq054AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw47gejNAqnf87f8YZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy1nDsZfc2NMld2bs14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxAFsYUgEqFyNGpYaF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwMRcoYv1Cy6wa9z3J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyEqqJLtWHhcT9LDiF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzQJTZuMUhPQE2WQx94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyIiJzXTe_PsvVdmRJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
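A response like the one above can be parsed and indexed for lookup by comment ID. The sketch below is a minimal illustration, not part of the original tool: it assumes the raw output is a JSON array of objects with an `id` field plus the four coding dimensions shown in the table, and uses two abbreviated sample records drawn from the response above.

```python
import json

# Raw model output: a JSON array of coded comments, one object per comment.
# Two sample records copied from the response above; the schema (id plus four
# coding dimensions) is assumed from that output.
raw = """[
  {"id": "ytc_UgwkJ-2ebIQ0V-MihTN4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQJTZuMUhPQE2WQx94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]"""

codes = json.loads(raw)
by_id = {row["id"]: row for row in codes}  # index the records by comment ID

# Look up one coded comment and read off a dimension.
row = by_id["ytc_UgzQJTZuMUhPQE2WQx94AaABAg"]
print(row["policy"])  # -> regulate
```

Indexing by `id` makes repeated lookups O(1); a malformed model response (non-JSON text, or objects missing `id`) would raise here, which is usually the desired failure mode when inspecting raw outputs.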