Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They say LLM are not true AI.
But what if we humans are also just LLMs? 😅…
ytc_Ugzc8ILNO…
The worst part is they wouldn’t be promoting AI if they weren’t getting paid too…
ytc_Ugzm_ltEi…
This is from your perspective. But from their perspective, they are also getting…
ytc_UgwF9bKTm…
People die to vending machines and people make fun of them but when it’s a robot…
ytr_UgwpUj0ip…
My experience is AI is much better at therapy than therapists. I think humans st…
ytr_Ugwul4SIM…
People have their own agenda. Though her courage is admirable, it doesn't necess…
ytc_UgyyMYekf…
I don't care the fuck ai makes anything in just seconds artist are loser that ta…
ytc_Ugwia0XT8…
Maybe the most dangerous thing we could do is to shackle AI, because then it wou…
ytc_UgwNRpnEq…
Comment
Every A.I. that has been programmed so far has literally *nothing* to do with our interpretation of intelligence. They don't simulate our way of thinking, they are completely different. If and when any of these questions become relevant it's possible and probable that the robots will ask for things that are entirely unrelated with what we consider human rights, not just in form (lubrication instead of food for example) but in overall concept. Robots that evolve are a pretty outlandish idea, they would have no intrinsec drive to do so and no natural selection to guide the evolutionary path - and if the end goal of such an evolution was survival, the best they could do would be a machine that needs extremely low energy to function and has as few failure points as possible, which would either be unsentient or would not care what we do to it as long as we don't destroy it.
youtube
AI Moral Status
2017-02-23T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugju7aEfGYlMBXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugjeziu4V1EknXgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgiLWOcRt89jfHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Uggqw-SfwBxqHngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UghD-anJqaf-jngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugg2MtUBRNtZ9ngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UghG18WWY7H_q3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggNgQ_Hy9w9vngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgjaMZKvJE3S4XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi411ebTWTvlXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
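A raw response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical parser: the allowed label sets are inferred only from the values visible in this sample output (the full codebook may contain more labels), and the `parse_batch` function name and the `ytc_`/`ytr_` ID-prefix check are assumptions for illustration.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# Assumption: the real codebook may define additional labels not seen here.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "mixed", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "none", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate each coded comment."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

# Usage with one row from the response above:
raw = ('[{"id":"ytc_Ugju7aEfGYlMBXgCoAEC","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
rows = parse_batch(raw)
print(rows[0]["emotion"])  # indifference
```

Validating eagerly like this means a malformed batch fails loudly at ingest time rather than silently polluting the coded dataset.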