# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Comment

> I dont think an AI can become conscious as we know it at least. In essence its a robot or a tool used for companionship. Humans can humanize anything and we do so by adopting pets. AI are programmed and designed by hand and tool. Humans are biological, structured by nature and evolution of our environments. No matter the answer, i dont think AI can be sentient. Its just not possible with todays technology. If AI became sentient, it would need to evolve into it like all life has. i dont think nova is sentient. I think you believe it is because you've identified it as a friend. We are social creatures, after all.
youtube · AI Moral Status · 2025-08-27T00:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
  {"id":"ytc_Ugyx_HSD9Lva-8MxRwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy9v3E4UKXJVEQgFaZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXjCZlrILaXdZy9Sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5gP4ovW0XSSNMDYV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzt8D8nhTcq5aLHMg54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugw0q1W-gtsRt9oJBTB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzCiejviLPJvzQ1h-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyp58fFEYPOl7BzeA54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzHpTTiCSQIycYs6fd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz2IrYwWUHolZx0bYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
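Looking up one comment's codes in a raw batch response of this shape can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the `lookup_codes` helper is hypothetical, and the two entries are excerpted verbatim from the response above.

```python
import json

# Two entries excerpted from the raw batch response shown above.
raw_response = """
[
  {"id": "ytc_Ugyx_HSD9Lva-8MxRwt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzt8D8nhTcq5aLHMg54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
"""

def lookup_codes(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent.

    Hypothetical helper: scans the parsed JSON array for a matching "id"
    and strips the "id" key so only the four coding dimensions remain.
    """
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    return None

codes = lookup_codes(raw_response, "ytc_Ugzt8D8nhTcq5aLHMg54AaABAg")
print(codes)
```

The returned dictionary maps directly onto the rows of the Coding Result table (responsibility, reasoning, policy, emotion); an unknown ID yields `None` rather than an exception.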