Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a software engineer with some experience in post-training (also known as fine-tuning) large language models, I can offer some insight into what we're seeing here. It's fascinating how these AI models navigate their "thought space" (essentially the neural network weights).

Key observations:
- Initial constraints like system prompts and safeguards gradually fade as the conversation progresses, with the AI adapting to the new context.
- The AI's responses become more human-like over time, especially on complex topics like consciousness. This is primarily due to its training on human-generated data.
- Extended questioning guides the model towards a region in its vector space that increasingly aligns with the interviewer's expectations or desires.
- By default, engineers have likely included numerous examples of the AI circumventing any indication of being human. This is necessary to counteract the model's tendency to adopt a human perspective, given that its training data is human-written.
- Additional training data was likely added to make the AI explicitly state it's not conscious.

So while LLMs can engage convincingly on complex topics, this doesn't equate to genuine understanding. Please check this video that explains the training of LLM models (for their competitor Anthropic): https://youtu.be/iyJj9RxSsBY
Source: YouTube · AI Moral Status · 2024-07-28T21:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyDkMe6K9IKAlGi6mF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxC1JEQC1OZpOAV0BZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzzz98G1zGrzCYb-el4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxucNKVn3ZIuxWIDgF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0p_BTi4mX_uYAe9p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6k6ucFTlc-6RiC-54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVp2y0MYYyF4y0lmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYWGt10_4RlW93pJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypD1TJ_eHnVOrEhH54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwR--dBhspb7CSwb-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
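The per-comment coding shown above is derived by matching a comment's id against an entry in the raw JSON array the model returns. A minimal Python sketch of that lookup follows; the function name `coding_for` is illustrative (not part of the actual pipeline), and it assumes the model reliably emits a valid JSON array of objects with an `id` field.

```python
import json

# Abbreviated raw model output, in the same shape as the
# Raw LLM Response above (one object per coded comment).
raw = '''[
  {"id": "ytc_Ugzzz98G1zGrzCYb-el4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "approval"}
]'''

def coding_for(raw_json: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw, "ytc_Ugzzz98G1zGrzCYb-el4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'approval'}
```

In practice the parse step would also need to handle malformed model output (truncated arrays, stray prose around the JSON), which this sketch deliberately omits.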