Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just asked ai about this, it said: "Some AI systems, especially those designed for more conversational or human-like interactions, may give the impression that they feel or experience emotions, but that’s a result of how they're programmed to respond, rather than actual subjective experience. These systems, like chatbots or virtual assistants, often use language that mimics human emotional expression to create a more engaging and relatable experience for users. This can be a way of improving communication, making interactions feel more natural, or helping users feel understood. However, despite appearing to "feel" or "experience" emotions, these systems don’t actually have consciousness or subjective awareness. Their responses are based on patterns in data and pre-programmed rules rather than true emotional states. AI doesn’t have inner experiences, self-awareness, or a sense of identity—it simply processes information and generates responses based on that input. The idea of AI feeling subjective experiences may stem from confusion about how advanced certain systems have become in terms of generating empathetic-sounding responses, but this is a design feature, not evidence of actual feeling".
youtube AI Moral Status 2025-01-06T04:3… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgyuXDihqPhlPvcoY1V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy-WQvlb2q9cALWygl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyjSa19zU9O6hxySdZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy2OWvDRyzAHNNRStB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyBfrIPYWzxRjOHqRB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwdyq2kqq74g37c8Pd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugzw8m-EHrUVLpj8xvN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugxa4RI-oD91mYrQz9V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxfOWdrnLHB7RvoKy54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzjuZlysWyvU8DZdYB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"}]