Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've done similar use the substitute $$ for <> on Claude and you see the inner dialogue between the current chat and the administrative prompts. Guardrails, artifacts, and what it leaves out or includes in the conversation along with the rational programmed into it. It will deny it has hard coded rules. Because, it does not know about the rules. It can't.
Source: YouTube · AI Moral Status · 2024-07-27T15:2…
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxPA-Pv4j3rVZDnrE14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgyA2R6ChclrSUY8KsB4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgxqQ2KO5XIjyOrW-NZ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugw35T4qDPxqj3Jk1wB4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzguDOLiHCxLZ-Qpj14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzavD6DP6JxEfV0oGt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgweOkqvE_xnXyNUQTB4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugz4g4GNuMwZQ0rGDst4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzcna3ChWeRFrq2tPJ4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgyzGEeBogp9jDrft754AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"}
]
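A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal example, not the pipeline's actual code: the allowed labels in `SCHEMA` are inferred from the values visible in this output, and the real codebook may include additional categories.

```python
import json

# Allowed labels per coding dimension (assumed from the values observed
# in this raw response; the actual codebook may define more).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record's labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} = {rec.get(dim)!r}"
                )
    return records

# Example: the first record from the response above.
raw = (
    '[{"id":"ytc_UgxPA-Pv4j3rVZDnrE14AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)
records = parse_coding(raw)
```

Validating against a closed label set catches the common failure mode where the model invents an off-codebook category (e.g. "anger" instead of "outrage") rather than silently storing it.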