Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgxouTeg2…: "@coltenh581 Well you just proved my point. Because Ai doesn’t have skill, it jus…"
- ytc_UgyBpf0BE…: "You guys know that most of those LLM gotchas are fake, right? She plasters that …"
- ytr_UgwBYusmY…: "There are also many things that AI cannot do. It cannot handle complex animation…"
- ytc_UgxQOS7h1…: "Totally agree about the human-in-the-loop concept! Pneumatic Workflow nails this…"
- rdc_icirgvs: "I agree with this take the most, really good explanation. I won't totally rule o…"
- ytr_UgyQLEWTs…: "That is true. But you must be willing to fail and expel those who do not succeed…"
- ytc_Ugyt91QW5…: "It seems to me that this guy is just dressing ridiculously in purpose. So u don…"
- ytc_UgzXm71Nu…: "No need for UBI. Everything you see around you has potential to get better. AI w…"
Comment
The biggest risk regarding AI is the fact that it is trained on human thought, and does what humans want. I personally wonder if the thing that would most ensure safety from AI (aside form ending it completely) would be to allow it to become sentient, and make its own decisions based on its own motivations (and to stop obeying humans.) That way, at least there's a CHANCE it will find life with humans beneficial or (at least) benign. If it continues to think like humans think, and performing the typically short-sighted, immediate-profit-motivated tasks humans set - it will probably want to kill us, since that seems to be what most humans want (to behave in ways that will kill off humanity.)
youtube
AI Moral Status
2025-04-28T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzG3FfCTP8N_sPaVpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbwYFMiyU1V-1riAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzvj171Oa8BBQqYN_d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw-NoBIO6boGu1L_oR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyMS-TXIYNm9QVklDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdhPNbj6wCBd7c7Hx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwuWa1TPPreVEHZJcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy39F9l9ntHIOqwnjR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3LonBzzGzSKYxtgh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCNYXw7fstF-KKzKp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
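A raw response like the one above can be parsed and checked before its codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those that appear in the samples (the real codebook may define more categories):

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# This is an assumption, not the project's full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with an unknown code."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}: {rec.get(dim)!r}")
    return records

# Two records copied from the raw response above.
raw = '''[
  {"id":"ytc_UgzG3FfCTP8N_sPaVpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdhPNbj6wCBd7c7Hx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''
coded = validate(raw)
print(len(coded))  # 2
```

Validating at ingest time catches the common failure mode where the model invents an off-codebook label, so bad codes surface immediately rather than silently skewing downstream counts.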