Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwyLexnM… — "YOUNG PEOPLE THESE DAYS, THINK, I FOUND THE TRICK, USE CHATGDP AND AI, THAN I BE…"
- ytc_UgxM5m8tw… — "I work with an AI and it does do that as well but i think its more about how we …"
- ytc_UgzGKNQzW… — "I remember Putin's speech… The interesting part is he's right there's a new arm…"
- ytc_UgxHkb3Lb… — "lolol i personally just am not on board with this its fine for media reasons or …"
- ytc_Uggfndq2J… — "Fooling the Chinese room is actually very simple. Just keep asking it same quest…"
- ytc_UgwDj1Sr3… — "AI needs to read all of Ayn Rand's books. That will make it understand morality.…"
- ytc_Ugx2Jemmr… — "HA HA HA, EVERYONES THINKING WHICH LAB SICKO MADE IT WITH THE ROBOT??????? YOU K…"
- ytr_UgwxTC8HH… — "@lomborg4876 What do you just do like basic coding for an image? Why not just le…"
Comment
I think there's effectively zero chance of making AI in such a way that we can understand what is happening ''below the surface'' to cause it to say/do something. I'm assuming this because we really don't understand what is happening ''below the surface'' to cause humans to say/do something. We pretend that isn't the case because we act like a stream of consciousness is determining what words we say and what actions we take, but that is a useful fiction. The conscious reasoning is determined by some process not directly accessible to us so that we can identify patterns and feel that our thoughts are rationally chosen. If we're hoping to have an AI that would have its behavior truly determined by something like a stream of consciousness then it wouldn't actually be human-like style of thinking - just a style emulating what humans pretend to be like.
youtube · AI Moral Status · 2025-10-31T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwUXNN0BH9UGFe3AIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzRmfkOp6bO0nb9UXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyi3RCOeht4txJNWBB4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzM2FPyCXlq3ddCGYd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxD4DvwO2UxlJlS6114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6Pt_A9K6iBockeqF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzOjPJrQfssCdRpZDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQk-TwitKTFePsIm54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugypezwk4B0M5UuE24V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyty4n1d7bq8r3t-k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
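Each record in the raw response above follows a fixed schema: a comment `id` plus the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response could be parsed and checked before being stored, assuming Python with only the standard library; the allowed category values are inferred from the sample output above and the real codebook may contain more:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"outrage", "approval", "fear", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        # Comment IDs in this dump start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_UgzM2FPyCXlq3ddCGYd4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"resignation"}]')
coded = validate_batch(raw)
print(coded[0]["emotion"])  # resignation
```

Rejecting a whole batch on the first bad record keeps malformed model output from silently entering the coded dataset; a softer variant could collect per-record errors instead.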