Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you for not using a 'what if' statement in this. When I engaged with ChatGPT about how humans gets into scenarios with it that aren't true, it told me that users can accidentally trigger fictional scenarios with 'what if' statements. It will answer based on that fictional scenario and the human doesn't realize what it's done. Very enjoyable video, thank you!
youtube 2025-12-31T20:1…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | approval
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxzT75prsC3RHDX3Up4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw0gommpWpeK9FBAFp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyF77gPmhnFxXVLD3N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx3jfmuCuuUDkR21nh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxcpbZIhWrgsM6kLCZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy8GfTyOqN54xd5CiZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzkejXdPcpDvCCRRo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzSp7leLtK1cmxA0sN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzqJBKVaKFjQAEzmbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgyrL7NOWjoyinALFhN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
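The raw response is a JSON array with one object per coded comment, so the per-comment table above can be reproduced by parsing it and looking up a comment's id. A minimal sketch, assuming the first record (id `ytc_UgxzT75prsC3RHDX3Up4AaABAg`) is the one for the comment shown above, since its values match the coding result; only the first two records are inlined here for brevity:

```python
import json

# First two records of the raw LLM response shown above (a subset,
# inlined for illustration).
raw = '''[
  {"id":"ytc_UgxzT75prsC3RHDX3Up4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw0gommpWpeK9FBAFp4AaABAg","responsibility":"none",
   "reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]'''

# Index the array by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Check the assumed record against the coding-result table above.
coded = records["ytc_UgxzT75prsC3RHDX3Up4AaABAg"]
assert coded["responsibility"] == "none"
assert coded["reasoning"] == "unclear"
assert coded["policy"] == "unclear"
assert coded["emotion"] == "approval"
```

The same lookup works against the full array; only the `raw` string changes.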