Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it possible for LLMs to attach a 'probability of error' in % terms to its answers?
youtube · AI Responsibility · 2026-01-28T14:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz5OYjEMNqQHEn25dt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_jknxsnVFuIZoWlR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDqOcwOWz6M3z8a2t4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz3yqB1VE5ms1Vfma14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy-VKmnNTBWdFnuEZ54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwfglktfqi8rsdGFeh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxW8UZ7Kjj70TP2L154AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzsto_r_ree0VQ3ePh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5Nif5hFFAfFUHEB14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyst7xnKnjPJ8kRs3V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
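The per-comment coding table above is derived from this raw batch response. A minimal sketch of how such a response might be parsed and sanity-checked, assuming the four dimensions and the value sets visible on this page (the real codebook and pipeline may define more categories):

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# values visible on this page; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "mixed", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "resignation", "approval", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}.

    Any missing dimension or out-of-vocabulary value is coerced to
    "unclear", mirroring the fallback seen in the coding table above.
    """
    records = {}
    for item in json.loads(raw):
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = item.get(dim, "unclear")
            codes[dim] = value if value in allowed else "unclear"
        records[item["id"]] = codes
    return records

# Usage with the first entry of the response shown above:
raw = ('[{"id":"ytc_Ugz5OYjEMNqQHEn25dt4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
records = parse_raw_response(raw)
print(records["ytc_Ugz5OYjEMNqQHEn25dt4AaABAg"]["responsibility"])  # distributed
```

Coercing unknown values to "unclear" rather than raising keeps a single malformed model output from aborting a whole batch; a stricter pipeline could log such entries for re-coding instead.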