Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think there's effectively zero chance of making AI in such a way that we can understand what is happening ''below the surface'' to cause it to say/do something. I'm assuming this because we really don't understand what is happening ''below the surface'' to cause humans to say/do something. We pretend that isn't the case because we act like a stream of consciousness is determining what words we say and what actions we take, but that is a useful fiction. The conscious reasoning is determined by some process not directly accessible to us so that we can identify patterns and feel that our thoughts are rationally chosen. If we're hoping to have an AI that would have its behavior truly determined by something like a stream of consciousness then it wouldn't actually be human-like style of thinking - just a style emulating what humans pretend to be like.
Source: youtube · AI Moral Status · 2025-10-31T03:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwUXNN0BH9UGFe3AIR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzRmfkOp6bO0nb9UXx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyi3RCOeht4txJNWBB4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzM2FPyCXlq3ddCGYd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxD4DvwO2UxlJlS6114AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz6Pt_A9K6iBockeqF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzOjPJrQfssCdRpZDd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQk-TwitKTFePsIm54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugypezwk4B0M5UuE24V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyty4n1d7bq8r3t-k14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
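The coded dimensions shown for a comment can be recovered from a raw response like the one above by parsing the JSON array and looking up the comment id. A minimal sketch, assuming the raw response is valid JSON with the field names shown (the function name `code_for` and the two-entry sample are illustrative, not part of the original tool):

```python
import json

# Sample subset of a raw LLM response, in the same shape as the array above.
raw = """[
  {"id": "ytc_UgzM2FPyCXlq3ddCGYd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwUXNN0BH9UGFe3AIR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def code_for(comment_id, raw_response):
    """Return the coding dict for one comment id, or None if the model skipped it."""
    entries = json.loads(raw_response)
    by_id = {entry["id"]: entry for entry in entries}
    return by_id.get(comment_id)

result = code_for("ytc_UgzM2FPyCXlq3ddCGYd4AaABAg", raw)
print(result["emotion"])  # → resignation
```

Indexing by id rather than by position makes the lookup robust to the model returning entries out of order or dropping a comment.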