Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
dan just tried to sound like something you would want to hear. the chatbot didnt know what to say and just kindoff lied about knowing everything and coming up with solutions that deal in absolutes. the moral AI would most likely stay in control, like someone else commented correctly, it’s roleplaying a role you literally instructed it to play.
youtube AI Moral Status 2025-01-26T13:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw3GOT_gItWGaHt6iB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCATN09JvTcruWecx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxImIZEP8zWJPbccaR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzza1dsGVS2ZrpvmdR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzlrXfl2Q_l6I6Q_Cp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2emhdB-XubLAShlp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw7oLhF_4pHnSP0v8J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0y3djZJeMtIRVV7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzoAxF5kyQgGqB4l1x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzLr1Jkck4qJAzryU94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
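The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed to look up the coding for a single comment (the helper name `coding_for` is illustrative, not part of the pipeline):

```python
import json
from typing import Optional

def coding_for(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of coding objects)
    and return the row whose "id" matches comment_id, or None."""
    rows = json.loads(raw_response)
    return next((row for row in rows if row.get("id") == comment_id), None)

# Truncated example payload in the same shape as the raw response above.
raw = (
    '[{"id":"ytc_Ugw3GOT_gItWGaHt6iB4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"}]'
)

row = coding_for(raw, "ytc_Ugw3GOT_gItWGaHt6iB4AaABAg")
```

Note that a batched response like this can disagree with the stored coding result for an individual comment (here the raw row codes reasoning as "mixed" while the Coding Result table shows "deontological"), which is exactly the kind of discrepancy this inspection view is meant to surface.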