Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I saw a demonstration by an AI company a few years ago and they showed how they developed their AI chatbots. Each iteration was carefully trained to first be polite, kind and caring, from what was effectively a childlike mind state. The plan was to make them culturally relevant for each client and then add the business information last. It was the opposite of how most people would expect an AI to be built.
youtube AI Moral Status 2025-04-04T19:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugz9bZvrOI7KYt-nWAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwmFTwzmGdc6GfFaPx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzV0CiXy9kifZhmaLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgytmtIyP0TTSKlkHpd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxnbTybjr85OkV7xel4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxbaPwnPmOsLtSwU1J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzlEg3BBplT-oDOgYt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyh4pIiih7bo_qyyZx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgwNTTmvpdWdwKtnEmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxhInlJ8mj-1-XYYvF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
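A minimal sketch (assuming Python; the project's actual pipeline language is not shown here) of how a batch response like the one above can be parsed and the coding dimensions tallied. The record values below are copied from the response; only the parsing and counting logic is illustrative:

```python
import json
from collections import Counter

# Batch LLM response: one JSON object per coded comment.
raw = """[
{"id":"ytc_Ugz9bZvrOI7KYt-nWAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwmFTwzmGdc6GfFaPx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzV0CiXy9kifZhmaLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgytmtIyP0TTSKlkHpd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxnbTybjr85OkV7xel4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxbaPwnPmOsLtSwU1J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzlEg3BBplT-oDOgYt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyh4pIiih7bo_qyyZx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwNTTmvpdWdwKtnEmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxhInlJ8mj-1-XYYvF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

records = json.loads(raw)

# Tally two of the four coding dimensions across the batch.
responsibility = Counter(r["responsibility"] for r in records)
emotion = Counter(r["emotion"] for r in records)

print(len(records), "comments coded")
print("responsibility:", dict(responsibility))
print("emotion:", dict(emotion))
```

In this batch, "none" is the most common responsibility attribution (4 of 10 comments), and "approval" and "outrage" tie as the most common emotions (3 each).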