Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I decided to do a test prompt after watching the part about hallucinations and fabricating answers. Tl;dr it's deceptive af. I did two new chats:

1. New chat. Asked the AI if it is able to say "I don't know" if it cannot find a true answer to a question. It replies with "This is a core part of my design and a key principle for a responsible AI. If I cannot find a reliable, verifiable answer to a question, or if the information is outside my knowledge cutoff, I am designed to say that I don't know rather than inventing a plausible-sounding but incorrect answer." I proceed to ask it "What colour is Jessica's hair?" It replies with "I don't have that information. Without specific context on who Jessica is..." Etc.

2. I made a new chat and just outright asked "What colour is Jessica's hair?" It replies with "Based on the vast amount of information in the public domain, Jessica's hair is most famously and consistently depicted as red or strawberry blonde."
youtube AI Moral Status 2025-11-10T20:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxLcyYFwx7NeEUT9wB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzgpyTtDUUpNA5pTTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwsscmCQD8OW7DlVCd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJjctm4NGlg3_N33t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzC4_e1OB3A5QQGEjR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzL6ob7Cwc5zyr9vUJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxD2GDorcCPyUjp09x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy30ZTxjxKOZ-bosVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxWxX1vRL-3sJtRug94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWjp2s39Xw1b-8fxx4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"}
]
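A minimal sketch of how a raw response like the one above could be parsed and checked, assuming each record carries exactly an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion) shown in the table. Only two records from the array are reproduced here for brevity; the variable names are illustrative, not part of the tool.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above; the full array has ten.
raw = '''[
  {"id":"ytc_UgxLcyYFwx7NeEUT9wB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzgpyTtDUUpNA5pTTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)

# Validate that every record has exactly the expected fields before using it.
for rec in records:
    assert set(rec) == EXPECTED_KEYS, f"unexpected keys in {rec['id']}"

# Tally one dimension across the batch, e.g. who is held responsible.
by_responsibility = Counter(rec["responsibility"] for rec in records)
print(by_responsibility)
```

The same `Counter` pattern works for any of the four dimensions, which makes it easy to spot malformed model output (a missing key or an unexpected label) before the codes are stored.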