Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Halting problem proves that no algorithm can fully understand all algorithms. Therefore one of two things must be true: Either there is an algorithm that humans cannot understand (not yet proven to be the case), or the human mind is not algorithmic and therefore cannot be duplicated by an algorithm.
youtube AI Jobs 2026-03-06T13:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxnypCJyhJp7h22cKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyVoiW_PSj5ks79daR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugx6cHvLfmtASvGMHaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyLQcsG5GdZaBHBudx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwnOPgjDV6aTOF-luB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_UgzyYqr7zcFl5lETrq94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwcwvYgXrnF-0C3f3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxg8K9tDw-CzeLuE2l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_UgwYNeMmoljc26INoC14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxPi6sgKt3LgQ23E554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"]}
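Note that the raw response above is not valid JSON: the final entry closes with "]}" instead of "}]", so a strict parser rejects the entire array. That is one plausible reason every dimension in the coding result falls back to "unclear". A minimal sketch of a defensive parser, assuming the four coding dimensions shown above (the function name and fallback behavior are illustrative, not this tool's actual implementation):

```python
import json

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, falling back to 'unclear'.

    If the whole array fails to parse (e.g. a malformed closing
    bracket), return a single all-'unclear' record rather than crash.
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return [{dim: "unclear" for dim in DIMENSIONS}]
    # Parse succeeded: keep only the known dimensions, defaulting any
    # missing key to 'unclear'.
    return [{dim: item.get(dim, "unclear") for dim in DIMENSIONS}
            for item in items]


# Hypothetical truncated example reproducing the "]}"-style defect.
bad = '[{"id":"x","responsibility":"ai_itself","emotion":"approval"]}'
print(parse_coding(bad))
```

A stricter variant could instead log the decode error and re-queue the comment for recoding, but the fallback-to-"unclear" behavior matches what the result table above shows.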