Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think you forget the challenge of error outputs. A human can have outputs and inputs that are not perfect and course correct along the way. An AI with chain of thought reasoning has no course correction mechanism. It cant detect its own bullshit. There is no "common sense". So, Errors compound.
youtube AI Jobs 2026-02-25T20:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxEsyssuoXpnciSf8V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzE5xX2uT2ma5sO-nt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzkn4OZspOx4Tvx2UJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxq-LDdM7uJAUv6FEZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxzRWN4ZtwFBa4rPKB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwXCmyCvJOqZgjm99h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwClKOCcNbtEHXWWuh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyVSjZQWerYT0niYLF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxlbhotWz08okDPZfB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9Hl4dK7FbDi8exzB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
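The coding result above can be cross-checked against the raw response by parsing the JSON array and selecting the entry whose id matches the comment. The sketch below illustrates this with a two-entry subset of the response; the helper name `code_for` is ours, and the assumption that the developer/deontological/fear entry belongs to this comment is for illustration only — field names come directly from the response.

```python
import json

# Subset of the raw LLM response: a JSON array of per-comment codes.
raw = """[
  {"id": "ytc_UgxzRWN4ZtwFBa4rPKB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw9Hl4dK7FbDi8exzB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

def code_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if absent."""
    for row in json.loads(response_text):
        if row["id"] == comment_id:
            return row
    return None

# Look up the codes assigned to one comment (hypothetical id mapping).
print(code_for("ytc_UgxzRWN4ZtwFBa4rPKB4AaABAg", raw))
```

A lookup like this makes it easy to verify that each dimension shown in the coding result (responsibility, reasoning, policy, emotion) was taken verbatim from the model's output rather than post-processed.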