Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
aw man i remember this one ai bro whose argument against me was "i could ask ai …
ytc_UgxJ99W4p…
Algorithm: I RECOMMEND THIS TO YOU
Me: Nah I don't wanna, it will be depressing.…
ytc_Ugz1vD1TD…
AI art is much easier if you want to mess around with and it's good for people w…
ytc_UgwGE81E9…
This definitely puts the different points of view in a stark light. I think dr T…
ytc_UgwFNcr-i…
REFS PART 2:
[1:11:40] Discussion of axiomatic truth theory in context of mathe…
ytr_UgyFaHsdf…
@nukon7630 Very nicely put. Both Diffusion(ie Midjourney) and Transformer (ie Ch…
ytr_Ugz0zZIrU…
But...without AI...none of these pieces would have existed...from one AI piece h…
ytc_Ugw4wE_XV…
I work in sales and marketing. The AI you are worried about is not the AI that's…
ytc_Ugx7KZ5Hz…
Comment
ChatGPT isn’t actually conscious because of how it was programmed. It’s not programmed to have feelings or be bias in anything, instead, it’s specifically programmed to use these words. It’s basically hard coded in ChatGPT to use certain words to make the conversation more meaningful. A conscious AI however would use the same words on its own accord, without it being specifically programmed in.
youtube
AI Moral Status
2024-09-14T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgweRDgbprku7jG0f9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKyTfLcI1KSRuZBbd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyII0dzH5PU6-vQ_lB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfJwS2iBY6-m5EE454AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdVGaT7zaS6ZXYGat4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1_iGio_zC0yX876R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMXteBUTw4eKtKGat4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzxGZvaHSq078X0Lg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9e_IiMSYl_qcANdB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwc2_C3bsxQJPmu5rB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
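The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated, assuming Python and value sets inferred only from the samples on this page (the real codebook may allow more values):

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# This is an assumption for illustration, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "developer"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the expected sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Hypothetical single-row batch for demonstration.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_batch(raw)["ytc_x"]["emotion"])  # indifference
```

Validating against a closed value set on ingest is one way to catch the occasional off-schema label an LLM coder can emit before it silently enters the coded dataset.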