Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
you can remedy the hallucination effect USING AI, by making different AI cross check each other... also insights will flow. The problem is the low level reasoning in AI is STILL better than most academics capacity to "reason" critically and stay open minded without resorting to consensus narratives.
youtube 2025-12-01T18:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzXacR1s7VScdA3ef54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxLPNGPQHGR6PoJlyp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwH6CYBFacuF6Oa5-Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwMP74SP30FnmGjmdZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxRmkcyS8GqQY7iG-t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwcWuYtANp_zlHBTVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyREHZh5cXljHv6OmV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy--8DDI2xuParbccd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzbgY6GyfDM6rIXYxJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzuuE4IbYkbNCDcHCJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]
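The coding result shown above is a single record pulled out of this raw batch response. A minimal sketch of that lookup, assuming the raw response is a JSON array of per-comment records keyed by `id` (the `lookup` helper and the two-record excerpt are illustrative, not the tool's actual code):

```python
import json

# Excerpt of the raw batch response above (two of the ten records).
raw = '''[
{"id":"ytc_UgwcWuYtANp_zlHBTVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy--8DDI2xuParbccd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse the batch response and return the record for one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

result = lookup(raw, "ytc_UgwcWuYtANp_zlHBTVp4AaABAg")
print(result["reasoning"], result["emotion"])  # consequentialist mixed
```

Matching on the stable comment id, rather than on array position, is what lets a batch response be checked against each comment's coding result even if the model reorders or drops items.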