Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another comment on deciphering truth from lies: since LLMs are probabilistic in nature, if all the sentence probabilities taken together point to saying one thing ("the Earth is round"), the model will say the Earth is round, hence a truth. But if all humans believe one thing without knowing we are wrong, the model will speak that "truth" too, and we will regard the model as all-knowing.
youtube AI Moral Status 2025-11-01T09:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzV2V0G7yDp1WKgOBp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTM3_p9Zs990HgOTt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzhHjhvvE2BcdADgod4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy9T-w0Clu46j3hNnB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxTGxgEQnozxdvv0Kp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzxn4xso09goJWUAI54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwptZLUKuh6knkJaW54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw928RQgF47WVOLCOd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwgcqhLlRnka9l9Ia14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
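The raw response is a JSON array of per-comment codes, so extracting the coding for a single comment amounts to parsing the array and matching on `id`. A minimal sketch (the `code_for` helper and the two-entry sample are illustrative, not part of the actual pipeline):

```python
import json

# Sample reproducing two entries from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgzV2V0G7yDp1WKgOBp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            return {d: entry[d] for d in DIMENSIONS}
    raise KeyError(comment_id)

print(code_for(raw, "ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'indifference'}
```

Looking the entry up by `id` rather than by position keeps the lookup correct even if the model returns the codes in a different order than the comments were submitted.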