Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
hey, I initially used to ask chatgpt "Should I take rest today?". But now I jumped up from asking to just saying "I am taking rest" . This might mean I am slowly getting out of using chatgpt unhealthily to a point where I myself can question the bot if it tells me something too good to be true. Recently ChatGPT told me something which felt too good for me, so I questioned it "Really bro? Don't be a yes man."
youtube 2025-10-28T05:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxiPpVelVheSIY1oFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxlOWrIUIhTwwQ4ra54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5Zaphh6dm45sPQi54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgybHuI2y2o3IzkndE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxdpNaJZXy6Q6Ac08V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTSVnrbjwSXCG152l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyzfGWhZM0FjRj1gGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyaqpYMyZNSj6LiNJN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWQR1qOcRS1nLkngF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynXH3bAyHIaUJoUMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]