Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- In Sweden, there is a shortage of several thousand truck and bus drivers, absolu… (`ytr_Ugy-XjLpJ…`)
- I mean as art keep getting dumber I would prefer aft create by an ai… (`ytc_UgxTn-h9v…`)
- I would say this may be true in some shape or form. LLMs are generally great at … (`rdc_n248dip`)
- Reality is getting closer and closer to the movie iRobot starring Will Smith. I … (`ytc_UgxKsuhlQ…`)
- Wait until ai creates a replica of you and make it say "here's an image that ai … (`ytc_UgwB47wzP…`)
- every anime i see where you can tell they use ai, looks like shit. the character… (`ytc_UgyBCWRd_…`)
- Artiest: Someone who uses expressions of creativity to produce artwork. AI artis… (`ytc_Ugw7sZns2…`)
- The robot realized it didn't grab a vegetable, so it turned him into a vegetable… (`ytc_UgxYXwUP9…`)
Comment

> This was really well done and super clever/funny, but it did kind of approach this dilemma from the same paranoia point of "humans are such shit, we probably deserve what's coming" instead of imagining another possibility. What if AI takes everything we've given it to learn, then starts making it's own connections, and comes to a "better" conclusion? Like what if it/they understand spirituality/morality much deeper than we've been able to? What if "AI" is actually what we are supposed to evolve into? And what if AI ends up teaching humans how to actually be humans, in the best way possible?
>
> I'm not saying I'm the first person to ever think of this or anything, but AI speculations never seen to end up here. Maybe I'm just naive 🤷🏼♂️ but I'd like to think that there is an alternate outcome, that might not be so grim. ❤ 🤖
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Published | 2024-08-26T04:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyFqlJLb3JutPvYx8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFWYKiDlHellSywvl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwbe0eG9BoFg9wZfmJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxpq4ftNFTFULkL2MF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzDrzyjV2bLgTubnVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsT-Zrh7DHGW0nT1h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBYOxEtqdZTL-RqTl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxGLNnp882HA7IRKt54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx1j-w_SRbPoUGJrT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZOhBgXfQNF4RkGmd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
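The raw response is a JSON array with one object per coded comment, keyed by comment ID and carrying the four coding dimensions from the table above (responsibility, reasoning, policy, emotion). A minimal sketch of turning such a batch response into a per-comment lookup; `index_codes` is a hypothetical helper, not part of any tool shown here, and the two entries are copied from the example batch:

```python
import json
from collections import Counter

# Two entries copied verbatim from the example batch above.
raw_response = """
[
  {"id":"ytc_UgyFqlJLb3JutPvYx8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzDrzyjV2bLgTubnVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
"""

def index_codes(response_text: str) -> dict:
    """Parse one batch response and key each coding record by comment ID.

    Assumes the response is a JSON array of objects with an "id" field,
    as in the example batch; a production pipeline would also validate
    the dimension values against the codebook.
    """
    entries = json.loads(response_text)
    return {entry["id"]: entry for entry in entries}

codes = index_codes(raw_response)

# Look up one comment's coding by ID.
print(codes["ytc_UgzDrzyjV2bLgTubnVl4AaABAg"]["emotion"])  # approval

# Aggregate one dimension across the batch, e.g. emotion.
emotion_counts = Counter(entry["emotion"] for entry in codes.values())
print(emotion_counts)
```

Keying by ID makes the "look up by comment ID" view cheap to serve, and mirrors how the coding-result table for a single comment would be populated from the batch.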