Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm an artist, I paint and draw. I've played around on Ai, it's a lot of fun. It…" (ytc_Ugw1iniGo…)
- "Just watched a video talking about AI has learned to lie all by itself, work aro…" (ytc_Ugw1eLD8_…)
- "neath your art was too cool and creative to be something people would promot th…" (ytc_UgzllXPSV…)
- "This is a satanic agenda, designed to steal, kill and destroy. You would be a fo…" (ytc_Ugy74opW5…)
- "I like how many of you bring your perspective to how fucked up it is to fuck the…" (rdc_dv66aow)
- "to be fair deviant arts main customer is artist so it HAD to change, you compare…" (ytc_Ugyzi-sSt…)
- "From the article The Walt Disney Company last week sent a cease and desist le…" (rdc_nhxet6e)
- "No!! No !! No robot will ever have human level as you said cause. human are spir…" (ytc_UgyFWkZxj…)
Comment
I really enjoyed this discussion (though it gives me existential dread about what the future will look like). However, one thing I think was missed during was a reason “why” this is all being pursued. What is the benefit that AI gives us that is so great that it may warrant a 25% chance of the world ending? I know the motivations of the people at the top leading these companies was discussed but I’m wondering about the technology as a whole.
To use an analogy from the video, say someone is building a bridge that has a 25% of collapsing and killing everyone crossing it. There must be something on the other side of that bridge that makes it worth even considering building it.
When I hear there’s a significant chance this goes horribly, I think “what reasoning is there to not throw all of this AI stuff away right now?”
youtube · AI Moral Status · 2025-10-30T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxgrK6C2Uao6798G7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrYwQ_ZYtGkegqHtV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-_boNT2UHH-KKDep4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtpzWAN0_e8eE9p-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4EJsMOUikWacNTml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxF4bXUctfpg4nSK9h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN9kO7i9XbC_VyJI14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzaOS5tyiTeC6YSXLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9FH0P2EV96FON3Yx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIkkQde0j9HOJ2gU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]
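The raw response is a JSON array of per-comment codings, so the lookup-by-comment-ID view above only needs the array parsed and indexed by `id`. A minimal sketch of that lookup, using two rows copied from the sample output (the variable names are illustrative, not from the tool itself):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# These two rows are taken verbatim from the sample response above.
raw = '''[
  {"id": "ytc_UgxgrK6C2Uao6798G7R4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyN9kO7i9XbC_VyJI14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the parsed rows by comment ID so any coding is a direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgyN9kO7i9XbC_VyJI14AaABAg"]
print(coding["responsibility"])  # company
print(coding["policy"])          # regulate
```

If a coded comment is missing or a dimension came back outside its expected vocabulary, it would show up here as a `KeyError` or an unexpected value, which is exactly what inspecting the exact model output is for.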