Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response directly by its comment ID, or browse the random samples below.
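For scripted access, here is a minimal lookup sketch in Python. It assumes the batch responses have been saved to a file shaped like the JSON array under "Raw LLM Response" at the bottom of this page; the file name `raw_llm_responses.json` and the helper `load_codings` are hypothetical names, not part of the pipeline itself.

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index each coded comment by its comment ID (hypothetical helper)."""
    with open(path, encoding="utf-8") as f:
        batch = json.load(f)  # a list of {"id": ..., "responsibility": ..., ...}
    return {row["id"]: row for row in batch}


# Assumed file name; the record shape matches the raw response shown below.
codings = load_codings("raw_llm_responses.json")
row = codings.get("ytc_UgwprTIEQMFtni6NxRh4AaABAg")
if row is not None:
    print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```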
Random samples:
- `ytc_UgyWNRUrc…`: "Not for nothing also, people like working with their hands. I don’t want the onl…"
- `ytc_UgzmLlSoF…`: "As an artist, i think ai is and still will be a gamechanger, it could help artis…"
- `ytc_UgwqP3rMD…`: "I am only a hobbyist artist and even my work was used to train AI. That hit me r…"
- `ytr_UgwpAN9pu…`: "Just got laid off from bell (call center) after a decade, ai and outsourcing is …"
- `rdc_mlf3fmz`: "It’s very useful and also extremely overhyped. Reddit has a way of having 2 ext…"
- `ytc_UgwQVgxRX…`: "Chat gpt may not intentionally lie (it actually has zero intentions) However、the…"
- `ytc_UgyCL30gm…`: "hey gabi! i was very critical of your original video on this topic - the defence…"
- `ytc_UgwYWSHJP…`: "I think the artificial intelligence battle is more between US and China! We, as …"
Comment
In an interview w/ Ezra Klein a few years ago, the scifi author Ted Chiang said when asked about the possibility of our creating a conscious AI ("moral agents" to use his term) said (paraphrasing) that while we probably *could* do so, and so possibly *would* do so, that we absolutely *shouldn't*. For a pretty simple reason: that in the process of getting from here to there, we'd almost assuredly create an entity capable of experiencing inconceivable (to us) degrees of suffering long before it ever became capable of articulating that suffering to us in such a way that we could recognize or care about. An entity, in short, whose entire existence was that 45 minutes on loop.
I haven't been able to shake that argument since.
youtube · AI Moral Status · 2023-07-04T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwprTIEQMFtni6NxRh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw27FpfK7sEKLlMBSp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwSiT3QEfazmgqqG5B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmF7o-erGCQvixtxJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgypRn2sJx-EoWIhunx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgybMvo9U-mB28Bh0S14AaABAg","responsibility":"researchers","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwDiiSiMzQ-ZrfKpxh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_cVMKFwepJJMfMp54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyyzgq7JOxYLDaZaAh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxdcaHg3D6UupP7MYx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
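Because the model returns bare JSON, a light schema check can catch malformed rows or out-of-vocabulary values before they reach the coding table above. Here is a minimal validation sketch; the allowed value sets are only those visible in this one sample batch, not necessarily the full codebook.

```python
import json

# Value sets observed in the sample batch above; the real codebook may define more.
DIMENSIONS = {
    "responsibility": {"developer", "ai_itself", "researchers", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"fear", "mixed", "indifference", "resignation", "outrage", "approval"},
}


def validate(batch: list[dict]) -> list[str]:
    """Return human-readable problems found in one batch of codings."""
    problems = []
    for row in batch:
        cid = row.get("id", "<missing id>")
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                problems.append(f"{cid}: unexpected {dim}={row.get(dim)!r}")
    return problems


raw = ('[{"id":"ytc_UgwprTIEQMFtni6NxRh4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
print(validate(json.loads(raw)))  # prints [] because this row is clean
```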