Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Or just don’t use them and search google, can’t tell you the amount of inaccurate or just straight up wrong information I’ve gotten from AI models presented in a way that it looks like it should be correct but it just isn’t(this has mainly been for programming related issues I’ve had but has extended to other subject areas just not as frequently)
youtube AI Moral Status 2025-04-03T12:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwms6G1OYKvkEP1UjN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy4kf9rQDyAy4H0CZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz0dswY9rsfrVjPlZl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyspfS_LxMFaHfCswN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRyooICXugSrbZLzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxAzTb_JAqsdmKQR3t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzYpXStPmhdEZ18QQJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz26Hjbuo90k20yPrR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxuVMeqrzVkAAVNybJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgybDL6YpIK6-LFDjJN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
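A batch response like the one above can be turned back into per-comment codes by parsing the JSON array and indexing it by comment id. This is a minimal sketch, not the pipeline's actual code; the variable names are illustrative, and it assumes the raw response is valid JSON with exactly the five keys shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`).

```python
import json

# A two-element excerpt of the raw LLM response shown above (illustrative).
raw = '''
[
  {"id":"ytc_UgxRyooICXugSrbZLzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuVMeqrzVkAAVNybJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
'''

# Parse the batch and index the coding rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coded dimensions for one comment.
row = codes["ytc_UgxRyooICXugSrbZLzd4AaABAg"]
print(row["responsibility"], row["reasoning"], row["emotion"])
```

The id-keyed dict makes it straightforward to join each coding row back to the original comment record, which is how the per-comment "Coding Result" view above can be produced from the batch response.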