Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If a robot’s only purpose is to complete menial tasks and labor then we should make absolutely sure that they are incapable of experiencing emotions and complex cognitive processing. Honestly we have enough problems with nations in debt and world hunger, the last thing we need is to create another intelligent species to worry about. Giving them the ability to feel pain would be torture since they are being created to do jobs. Imagine letting your car talk to you, sure you could have conversations with it, but now it’s going to be lonely for 90% of the day and has to live in a burning hot garage, spend hours parked outside in the hot sun all alone, who wants that? No one. So don’t let it be able to suffer such. Most humans don’t even like their jobs and that includes bare basic crap like cashier and stuff, imagine how miserable a robot in the future would be it was able to comprehend how horrible something dangerous like being on the road 24/7 or construction was.
Ignorance is bliss
youtube
AI Moral Status
2022-07-29T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxAUkgH0FvWlsxcNd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzzCXZd0tFYPAMDn1h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyzBatEYE34SZxpp3h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRa-fRR_IIzpiIuOl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxbb1GDm1Hdr26cfWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxleG3LMgHqmHvkfC54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwzzwG6ptMjJh3bxJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyWvpB4olxqhWeaPpF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPXLLrQk7hwwy7LDZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyReGJ-UxS88Vq96_Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
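The raw response above is a JSON array in which each object carries a comment ID plus the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated into a per-comment lookup table follows; note that `parse_coded_batch` is a hypothetical helper, and the allowed value sets are inferred only from the responses visible on this page — the actual codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the responses shown on this
# page (an assumption, not a published codebook).
ALLOWED = {
    "responsibility": {"none", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"fear", "approval", "indifference", "outrage"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response into a dict keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        # Reject any value outside the observed coding scheme.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example with a made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

The lookup-by-ID behavior this page offers would then amount to a single dictionary access on `coded`.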