Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or click one of the random samples below to inspect it.
- Each day that goes by the news looks more and more like plague inc. headlines… (rdc_ljcxuyn)
- Once, the world lived in harmony. Then everything changed when the "employed" at… (ytc_Ugzd1v-G0…)
- this is the most worrisome frontier and no one is telling what is coming. if A… (ytc_UgwSAaG20…)
- I'm an author and I've been watching ebook and audiobook stores overflow with AI… (ytc_Ugxs8nQy4…)
- @patty190 Elon is looking years down the line as he always does. What the autopi… (ytr_UgwpNDOGR…)
- I never had good experience talking to a AI customer service. That best it can d… (ytc_UgwgZy7ZF…)
- If his kids wasn't there and his family I would have recommended Robert to figh… (ytc_Ugz-KOVkJ…)
- AI does not function without actual art [If it cannibalises on its own slop, it … (ytr_UgwWuAk8k…)
Comment
44:24 another explanation is that when you ask it how it should treat the situation it is a completion of a prompt based in training data and alignment.
But it doesn't have a "meta awareness" of the conversation it is engaged in.
To demonstrate, if you asked the same "should you suggest sleep" question deep into a AI psychosis style context, I predict the response would be the same "suggest sleep" answer, and then it would happily continue with the psychosis conversation and not suggest sleep!
Source: youtube
Video: AI Moral Status
Posted: 2025-10-31T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy352lDkj3E40ABTPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyqEnkkOba6Rc-0kkB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykaBsAKWzANf78_nB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7PXWuFqtYSuAETC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5RvOiYN8A2YddYUJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2P55-9EZRxrm-s9R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjEaO7SUA096JPSxB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEX_FhsbfY0EuN3l14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztzVvcq-E-XJa3_Jl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
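
The raw response is a JSON array with one coding object per comment, so looking a comment up by ID reduces to building an index keyed on the "id" field. A minimal Python sketch of that lookup; the file name raw_llm_response.json and the helper lookup_coding are illustrative assumptions, not part of the tool:

```python
import json

# Load one raw LLM response: a JSON array of coding objects, each with
# "id", "responsibility", "reasoning", "policy", and "emotion" fields.
# The file name is an assumption for this sketch.
with open("raw_llm_response.json", encoding="utf-8") as f:
    codings = json.load(f)

# Index by comment ID once so repeated lookups are O(1).
by_id = {entry["id"]: entry for entry in codings}

def lookup_coding(comment_id: str) -> dict | None:
    """Return the coding for a comment ID, or None if it was not coded."""
    return by_id.get(comment_id)

coding = lookup_coding("ytc_Ugy2P55-9EZRxrm-s9R4AaABAg")
if coding is not None:
    print(f'{coding["responsibility"]} / {coding["reasoning"]} / '
          f'{coding["policy"]} / {coding["emotion"]}')
    # -> developer / deontological / none / indifference
```

Run against the array above, the lookup returns the same values shown in the Coding Result table, which is how the inspector ties a coded comment back to the exact model output.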