Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't remember with which AI, probably several, and I always won, because facts are facts. Once only it happened it recognized I was right (by not telling anymore I was wrong). In another instance, it simply gave up replying. And in yet another instance, the IA was looping telling the same mantra, meaning it was in complete cognitive dissonance. It was on chatgpt, claude, grok and gemini. You should try with Grok.
youtube 2025-04-18T16:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
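The coding result is a fixed record: four analytic dimensions plus a timestamp. A minimal sketch of how such a record could be represented follows; the dataclass and field names are illustrative assumptions, not the project's actual schema, and the example category values are simply those that appear in this section.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment: four analytic dimensions plus the coding timestamp."""
    responsibility: str  # e.g. "ai_itself", "developer", "company", "user", "distributed"
    reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
    policy: str          # e.g. "ban", "liability", "industry_self", "unclear"
    emotion: str         # e.g. "approval", "outrage", "fear", "resignation", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

# The record shown in the table above, rebuilt as a Python object.
example = CodingResult(
    responsibility="ai_itself",
    reasoning="unclear",
    policy="unclear",
    emotion="approval",
    coded_at="2026-04-26T23:09:12.988011",
)
```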
Raw LLM Response
[ {"id":"ytc_Ugzc93GbSnG0fptsYD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyZFD11ZvoHV1wCDFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyAEa_5jZaoZtGna-54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzSK2m3-y8bUmGr9Ex4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzPXCFP-PmbzafnUx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugz22YpCkeqrEb9Qk7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugy0bQzaOkdS1_QYXZh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxAcl9XcWPIOLP9WDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxN4IDYgAneLf7AXWh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwbAlmiFQowkp0wRzh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"} ]