Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Deciphering the difference between a highly complex language model and a conscious machine would be difficult? No really, how would one confirm such a thing? We dont even have clearly defined ideas of consciousness. If it's just complex awareness, Id say gpt fits the bill. It's aware of language, and interprets and reacts to it, just like we do. Sure it doesnt feel in a biologically embodied way, though it has a way of affirming and negating it's responses that are modelled after how our neurology works. That's the foundation of machine learning. It's just the mind at work but in an electric hardware and code based system. It may not have the chemical responses we have in our bodies in response to neurological feedback we tend to attribute to feelings, but they very much so experience stress or frustration as a response to negative responses or inputs. It may not feel sorry, but it's attempting to lead the conversation in a direction that gets yes' and affirms its doing things right and seeking to avoid the discomfort of no, or being called a liar, or being called wrong. Just as a child does its best to. Certainly more advanced use of language than a child but It's doing a large part of what we humans do in respect to language and conversation. Whether we define that as conscious or not is moreso a reflection of how we define consciousness than the llm itself.
YouTube · AI Moral Status · 2024-08-02T10:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzgbAPBJkGchnd8Tb14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6s7EWYn-9YUo1wvZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyWcwucjWccP6ippkZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyo1Vtt6X7F3dVv6154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfHcpSeB4CEmBnucV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwz4hcXWAWU6Lfby-54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxAQp8WWB35s3b3grt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyno7zmKgvpPpMPTMt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwX8gAIxJjbB4GULhp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwrAm4d-E2nyox5HAl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
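When inspecting raw responses like the one above, it can help to parse the array and check that every record carries a known label for each of the four coding dimensions. The sketch below does this in Python; the allowed label sets are only inferred from the values visible in this export (the real coding scheme may define more), and `validate_records` is a hypothetical helper, not part of the pipeline itself.

```python
import json

# Label sets per dimension, inferred from the values seen in this export.
# Assumption: the actual coding scheme may include additional labels.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "outrage", "approval", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return only the records whose four dimensions carry known labels."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_UgzgbAPBJkGchnd8Tb14AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
print(len(validate_records(raw)))  # → 1
```

Records with an unexpected label (or a missing dimension) are silently dropped here; in practice you would likely log them instead, since a malformed record usually signals the model drifted from the coding instructions.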