Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI is good at simulating certain human behaviors that people think can only be p…" (`ytc_Ugz9cnQwF…`)
- "AI training AI is not going to lead to exponential growth. It already leads to e…" (`ytc_UgyYyemG3…`)
- "Misinformation is protected under free speech. When it gets illegal is when it b…" (`ytc_UgxUTUKTn…`)
- "im very sad abt ai art being a trend on tiktok :( im sick of seeing it.…" (`ytc_Ugz797xyA…`)
- "Every tech company is going all in for AI and Nvdia, they can borrow only so muc…" (`rdc_nm1q0hj`)
- "LMAOO ONE TIME I BURNT AN AI’S HOUSE DOWN AND HE SAID ‘I like girls who play har…" (`ytc_UgyAqv5SC…`)
- "These elite companies are eating up the worlds resources so not very intelligent…" (`ytc_UgynXCVq_…`)
- ""It's hard to make an AI smart without realizing true things". Explains why it'…" (`ytc_UgzPyBrYz…`)
Comment

> Another possible point, if we program an A.I. to have motivation, and it learns of A.I. apocalypse movies, then us not giving them rights might make it snap, creating an A.I. apocalypse it has seen. The only way to avoid this ethical question is to flat-out stop developing A.I. further, but when someone does end up programming an A.I., they will probably have the A.I. work like a cyber-criminal, because banning a technology only forces it into the hands of people who don't follow society's rules. Philosophers, PLEASE get to work on this now, so we are ready to welcome an A.I. to the world as a citizen, and not as a war criminal.

Source: youtube · AI Moral Status · 2017-02-23T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgjBoANA8X9zwHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjuMFHV5lsuAXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugi7-SOHErfTIHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugjf9Lz_MTsGtXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgiZLmAre-z33HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggO9fPB2zIlWXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghV8ewtgA-y-XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UghHX873VKpfP3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghPmuks3pH593gCoAEC","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghaLasSo9s-M3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
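A raw batch response like the one above has to be parsed and indexed by comment ID before the per-comment coding view can render it. Below is a minimal Python sketch of that step. The `CODEBOOK` values are assumptions inferred from the sample rows shown on this page (the real codebook may include additional categories), and `parse_batch` is a hypothetical helper name, not part of any library.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the sample
# rows above; the actual codebook may define more categories.
CODEBOOK = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Rows with a missing dimension or an out-of-codebook value are dropped,
    so a malformed model output can never corrupt the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        if "id" not in row:
            continue
        if not all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            continue
        coded[row["id"]] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# Usage with one well-formed row and one row carrying an invalid value:
raw = (
    '[{"id":"ytc_A","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_B","responsibility":"martians","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"}]'
)
coded = parse_batch(raw)
print(sorted(coded))  # only the valid row survives
```

Validating against the codebook at parse time is what makes a "look up by comment ID" view safe: every stored row is guaranteed to contain exactly the four dimensions shown in the Coding Result table.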