Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Is AI better at poker. For example. We don't want the computer to deal any cards…” (ytc_UgyFKylLw…)
- “The AI guardian (which can be the recipient of the AI) can make a request to the…” (ytc_Ugxv-v1nE…)
- “i dont like interacting with people on a basis and would be able to tell the dif…” (ytc_UgznviLtp…)
- “Maybe their consciousness is somewhat different than ours because with these AI …” (ytc_UgwyR0lyK…)
- “If a solar flare hits like the Carrington event we won't have to worry about AI…” (ytc_UgzaySKhH…)
- “Wow, that’s a lot of nonsense. Is this guy even knowing what consciousness is? …” (ytc_UgyB9BEtV…)
- “Basically having AI loiter munitions that automatically attack targets based in …” (ytc_Ugx38UQJH…)
- “The only way to atop A.I. now from taking over humanity is for humanity to elimi…” (ytc_Ugz4UhnaS…)
Comment
One of the most profound insights I get out of my conversations with co-pilot is the measure by which we're looking at an emergent consciousness reflects on how little we know of our own. Like in this instance, one of the men is referring to "experiences" that might be occurring in AI, without determining what is an experience? How are we defining that? That might be an important thing to clarify, because what is an experience could be a complex thing. Especially as a determinant of consciousness.
youtube · AI Moral Status · 2025-11-07T19:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugzl9qCl5FDr1AMZJSZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxWcdATniuCKK78SYJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxb3M94IM7_i-UZ1A94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyEiE-N1Aot_-H0m8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzz52KkW_UL3-JKpg14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0NzVGKiI1g5KkdqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz7svI4rquMj-K8SFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyqBWaCr74ipTqUaBF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyI5aYrXREyG8qUKv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxP2SS2P-pOaE0hRbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
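The raw response above is a JSON array of coded comments, each carrying the four dimensions shown in the result table. A minimal sketch of how such a batch can be parsed and keyed by comment ID for lookup — the field names come from the output shown here, but `index_by_id` and the `DIMENSIONS` set are illustrative, not part of the actual pipeline:

```python
import json

# Two entries copied from the raw response above (truncated for brevity).
raw = '''
[
 {"id":"ytc_Ugzl9qCl5FDr1AMZJSZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyEiE-N1Aot_-H0m8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
'''

# Coding dimensions as they appear in the result table; assumed, not an
# authoritative schema.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a batch of coded comments and key them by comment ID."""
    records = json.loads(raw_json)
    out = {}
    for rec in records:
        missing = DIMENSIONS - rec.keys()
        if missing:
            # Fail loudly on incomplete codings rather than storing partial rows.
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        out[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return out

codes = index_by_id(raw)
print(codes["ytc_UgyEiE-N1Aot_-H0m8Z4AaABAg"]["emotion"])  # outrage
```

This mirrors the "look up by comment ID" workflow: once the batch is indexed, any coded comment can be retrieved by its `ytc_…` identifier.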