Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytr_UgzmoC4Ur… · Hey there! In case you're interested in more engaging discussions, don't forget …
- ytc_UgzUUYRbU… · AI will reduce City Lawyers to the Village Street Dustmen..... We shall see VERY…
- rdc_nt6gbm2 · Yep, and GPT-5 without thinking is SHOCKINGLY bad. A hallucination machine like …
- rdc_oa4r0x9 · AI was created by people and like people, it will make mistake which is no comfo…
- ytc_UgwYx4EvE… · "Ai needs to be truthseeking" Says the guy who's always tinkering with Grok ever…
- ytc_Ugz-upvxa… · Of course the technology is the problem!!! Without it, there is no problem. AI w…
- ytc_UgyPJtr4J… · I feel bad about it but I hate people driving trucks rushing around always pulli…
- ytc_Ugz6-bk21… · even if ai can do trained jobs better still nobody can just replace engineers li…
Comment

> I guess I'm an optimist when it comes to AI not ending humanity; not because I think we're going to create consciousness responsibly, but because I think the premise itself it's a fool's errand. And honestly, I feel insane listening to two clearly intelligent men discuss what would happen if a calculator wanted to kill us all, as if a calculator will ever be capable of having wants. I think the personification of AI in general is doing so much harm right now and I will not pretend it is capable of ever being more than an it. I'm so ready for this to go the way of NFTs.

Source: youtube · AI Moral Status · 2025-10-31T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
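The lookup-by-comment-ID view can be sketched in a few lines. This is a minimal, hypothetical helper (the `lookup` function and sample data are illustrative, not part of the tool), assuming only the structure visible in the raw response above: a JSON array of objects, each carrying an `id` plus the four coding dimensions.

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codings,
# each keyed by the original comment ID (e.g. "ytc_…", "rdc_…").
# These two entries are copied from the response shown above.
raw_response = '''[
  {"id": "ytc_UgyPNrdDRZiPWpfWqHB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def lookup(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = lookup(raw_response, "ytc_UgyPNrdDRZiPWpfWqHB4AaABAg")
print(coding["emotion"])  # → approval
```

A missing ID simply returns `None`, which lets the inspector distinguish "comment never coded in this response" from a coded value of `"none"`.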