Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am CONSTANTLY telling chatgpt it's incorrect and to fact check itself while hunting for nuggets of information I can then do more research on. It does not care if it's correct or not, but it knows exactly what's up when you tell it, and it's incapable of accepting logic if it goes against the general consensus, even if that doesn't line up with the scientific consensus.
youtube · AI Moral Status · 2025-10-30T23:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx533xVo-hSoW3STyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgznUdxzETHRyzE4L8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxvpx4B5WAI1AG8d2F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzlZs1Bk1mY4KiAxKx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy0jut33-HQcZnXaWJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7SCNpTM5aM7M6FdF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLHDzE6jDrpPtKtnN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyJOTqlMJWZtjCj7894AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFKq2YDOqwxlaeXqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykcxymMbgSrsjYauR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
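A raw response like the one above can be inspected programmatically by parsing the JSON array and indexing the records by comment id. The sketch below is a minimal, hypothetical helper (the `index_by_id` function name is an assumption, not part of the tool) that looks up the record matching the coded comment shown in the table:

```python
import json

# One record from the raw LLM response above, shown here as a small
# self-contained sample; the full response is a JSON array of such objects.
RAW = '''[
  {"id": "ytc_UgzlZs1Bk1mY4KiAxKx4AaABAg",
   "responsibility": "company",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "outrage"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and map each comment id to its record.

    Hypothetical helper: assumes the response is a JSON array of objects
    that each carry an "id" field, as in the dump above.
    """
    records = json.loads(raw)
    return {record["id"]: record for record in records}

coded = index_by_id(RAW)
record = coded["ytc_UgzlZs1Bk1mY4KiAxKx4AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```

Indexing by id makes it easy to cross-check a single comment's coded dimensions (as displayed in the table) against the exact model output.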