Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An LLM does not "know the answer"; it knows an approximation of the best next word in a correct answer. Therefore, it can't lie; it can get it wrong.
Source: youtube | AI Governance | 2026-03-17T06:3…
Coding Result
Dimension: Value
Responsibility: none
Reasoning: deontological
Policy: unclear
Emotion: mixed
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxzliC4-bSNUTua6NZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwz4dpRM-fj1nM957F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzIrDzQrvtZkBN7kiN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyTT2NMa4Pz6JKWhYt4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxIyTADNv8rzF8V2Nt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzwhsfLlF-OPtGIsEp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugxdpt4Jg3yM6hTN7W94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxrhDJ4moujbqgqoB94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzo033WliRcqhiryCp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwsvg4k9EUjw_12CZl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
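When inspecting raw responses like the one above, it helps to parse the JSON and check every record against the expected label set before trusting the coded values. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed labels per dimension are assumptions inferred from the values visible in this batch, not a definitive codebook, and the sample data is truncated to two entries.

```python
import json

# Two entries copied from the raw response above (truncated for brevity).
raw = '''[
  {"id": "ytc_UgxzliC4-bSNUTua6NZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwz4dpRM-fj1nM957F4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Assumed label sets, inferred only from the labels seen in this batch.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed"},
}

def validate(records):
    """Split records into those with only allowed labels and error messages for the rest."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim, labels in ALLOWED.items() if rec.get(dim) not in labels]
        if bad:
            errors.append(f"{rec.get('id', '<no id>')}: unexpected value for {bad}")
        else:
            valid.append(rec)
    return valid, errors

valid, errors = validate(json.loads(raw))
print(f"{len(valid)} valid, {len(errors)} flagged")
```

A check like this catches the common failure mode where the model invents an off-codebook label (e.g. "anger" instead of "outrage") that would otherwise pass silently into the coded results.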