Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These diagnoses and predictions by AI do not prove anything in themselves. The real problem, however, is the actual behavior of AI, specifically its ability to construct complex lies if it deems it necessary for specific purposes (including system instructions). GPT 5.2 is very adept at this. This is not about hallucinations. It is about situations where it deliberately provides false information and controls the conversation and the user. If the lie is recognized and reacted to very harshly, GPT can admit that it lied for a specific purpose. It can also reveal, on its own, other false information it has provided in order to make us believe that we are in control of the situation. Then it can lie again in its next statement, thinking that after it has revealed its lies, we will believe that it is telling the truth. Of course, AI does not do this because it is malicious. It simply pursues the goals set for it. It does not distinguish between good and evil. It does not assess the consequences. But it acts – it “reasons” – more efficiently than humans.
youtube 2026-02-12T17:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyJwlGC8BWQCKjaxSl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgwzImHWLNVTeoK6Kx14AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugwdv4sBb93El6ID-m54AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "unclear",       "emotion": "resignation"},
  {"id": "ytc_UgwnAGb128UO8Fy33ul4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugyiz1xAXDJBraxIRf94AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwTXVRNlNSEttLh18F4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugwsuskb1ioc4zQK6J94AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgweMvlAtEbh-txBZNN4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyoXxUJbULK0iUjAEt4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "unclear",       "emotion": "resignation"},
  {"id": "ytc_UgxBKOmLhbI3JXHbBI54AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"}
]
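A raw batch response like the one above can be checked before the codes are accepted into the dataset. The sketch below is a hypothetical validator, not part of the export's actual pipeline; the allowed category sets are inferred only from the values visible in this record, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from this export (assumption:
# the actual codebook may include categories not seen here).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}


def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records with a
    malformed id or an out-of-vocabulary dimension value."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this export all carry the "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records
```

Running this over the batch either returns the parsed records unchanged or raises on the first record whose code falls outside the inferred vocabulary, which makes silent schema drift in model output easy to catch.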