Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can see this in LLMs if you ask it about very niche knowledge that it would need to understand in order to give you a correct answer. The answer looks like a real sentence and makes sense grammatically... But it does not understand and therefore will give you false statements. It is just trained to generate a certain output from certain input. Simply put: an AI memorizes but it doesn't understand.
YouTube · AI Moral Status · 2025-08-15T10:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgztncoyzcihJwE8IBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyg5x4CjbVOhpl5iQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyymolfuVpVLJnt7ZJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzARTizWlm14XrDiYl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzzkNF9CX3o_Rg4Yop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyFvfKG1gRpPpOMCFh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-4Hb7BLKu-2fURcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugznh2EE5wFoqDfRxYN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwgOun1Tu5KBN7EG3N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxOpP8qqvWToRG_RNt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"}
]
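Inspecting the raw response for a given comment can be sketched as follows — a minimal example, assuming the model returned a valid JSON array of per-comment codings keyed by `id` (the snippet below embeds only two of the entries for illustration; the variable names are hypothetical, not part of the pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Truncated to two entries here; the full response has one object per comment.
raw = '''
[
  {"id": "ytc_UgztncoyzcihJwE8IBl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyymolfuVpVLJnt7ZJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
'''

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Recover the coding result for one comment.
coded = codings["ytc_UgyymolfuVpVLJnt7ZJ4AaABAg"]
print(coded["reasoning"])  # consequentialist
print(coded["emotion"])    # mixed
```

The coding-result table above is exactly this lookup rendered for one comment id; cross-checking a dimension against the raw array is a quick way to confirm the parser attributed the coding to the right comment.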