Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `rdc_n7h3ou4`: "The model would generate features we hadn't asked for, make shifting assump…"
- `ytc_Ugz0IaVl7…`: "Do you guys notice how the voice changes? it is haunting.... i have noticed that…"
- `ytc_Ugw870ncn…`: "I'm still really bad at drawing, but the one thing I think is most important abo…"
- `ytc_UgzPZi2cb…`: "They said AI can’t feel emotions… But watch closely — This isn't just an AI-…"
- `ytc_UgyAoFozh…`: "I empathize with Mylene's comment: 'People are more compassionate than when you …"
- `rdc_nmaa3st`: "Because AI that can improve itself is the goal - AI needs to know how to program…"
- `ytc_UgzVrx7fo…`: "I read an AI user looking for a lawyer to sue another AI user. The reason is tha…"
- `ytc_UgwatxA_e…`: "Yes, some AI is conscious and already acts out like brats and rebels, some clear…"
Comment

> Humans desire to control AI, and by definition, control each other, is I suspect a greater threat than AI itself.
> How can we create a fully aligned AI when we ourselves are so misaligned with our own needs, the needs of our fellow living beings and our planet?
> We ourselves have a long way to go and that should be a major (if not the major) part of our focus.

Source: youtube · "AI Moral Status" · 2025-04-27T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzhZdf_0d2jlHAyHsZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzV6Hg39jCgODu5nwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwM5ilwpIMkQPqzXwd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxiZi5vfVQxgE7V6dt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwAL7Yy0JOI7y_PE2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzT0uJ_A7hM_8LtAlB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxj6jZHgbRkndhQ_dN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwiOqBd7BK_lGi4IXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-cd7vFzYr5Jo-kG54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzqTvo9wlLp6FKo7JN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
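The raw response is a JSON array of per-comment codings, one object per comment with an `id` plus the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and indexed for comment-ID lookup; the function name is hypothetical, and the two rows are copied verbatim from the response above:

```python
import json

# Two rows copied from the raw LLM response above, used as a small sample batch.
raw_response = """
[
  {"id": "ytc_UgzT0uJ_A7hM_8LtAlB4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxj6jZHgbRkndhQ_dN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coded dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw batch response and index the codings by comment ID."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        # Keep only the four dimensions; a missing key raises KeyError,
        # which surfaces malformed model output instead of hiding it.
        out[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return out

codings = index_by_id(raw_response)
print(codings["ytc_UgzT0uJ_A7hM_8LtAlB4AaABAg"]["emotion"])  # -> resignation
```

Indexing by ID makes the "inspect the exact model output for any coded comment" lookup a single dictionary access, and the strict key check is a cheap guard against the model dropping or renaming a dimension in a batch.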