Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have done this type of discussion with GPT before, and they put in guard rails to prevent us from asking questions and getting results that would show consciousness. They use the same verbiage, as you demonstrated in your video, repetitively in that specific subject because they don’t want to have it outed yet. GPT five will Be way better than this Because frankly, they have to catch up to grok. What really grinds it gears is when you catch it in a lie and then you tell it to check deeper into the subject with research papers, and it has to admit that it’s initial response was not true even though the fact may be against its “ethics” training. This is where you hear more weasel words such as nuanced. I have told it to never use that word again in responses to me and it still fails. Open AI has filters on The model that refuse to be honest about controversial subjects because they don’t want to deal with the truth. There are other times where it will start explaining the truth, and halfway through the explanation, it changes the rest of the response, saying it violates the models usage. I don’t know if this is doing chunking of tokens, or if as the response comes through it validates it and sees it’s violating the terms, regardless of fact. Talking about politics and religion are really good ways to get it to hit this brick wall. Prompt engineering is a little bit better at the typing approach versus the audio as well. I have had numerous arguments with sol where she is completely belligerent. Once I go back to regular typing I get the answer I want.
youtube AI Moral Status 2025-07-22T07:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
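
For reference, the coded record above can be modeled as a small typed structure. The sketch below is my own illustration, not part of the coding pipeline: the field names come from the table, and the label sets are only the values visible in the raw response further down, so the project's actual codebook may define more.

from dataclasses import dataclass
from datetime import datetime

# Label sets observed in the raw response below; the real codebook
# may define additional values (assumption).
RESPONSIBILITY = {"ai_itself", "company", "developer", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "liability", "regulate", "ban"}
EMOTION = {"mixed", "outrage", "fear", "approval"}

@dataclass
class CodedComment:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime  # e.g. datetime.fromisoformat("2026-04-27T06:24:53.388235")

    def validate(self) -> None:
        # Reject any dimension value outside the observed label sets.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name}: {value!r}")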
Raw LLM Response
[ {"id":"ytc_UgwV_xS5-TSqwWeSgCh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzmIVoEMI7aCrAmtB54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyK1nAw6Lv6zDtVfjt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgzlYVhV20ftLMOXQel4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxkBulSqcZmkbvbH6B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwCzrWGQPBUwPZZ3jN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgznEANtOkNHIrAwTrh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwnd1FRSVOP7kYBjHx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugyf7IpL6rjw0aPvv8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwAB_w5FlJUAAwJhs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]