Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
"hallucination" is a reference term for when an ai program doesn't know or have the data to complete a task but uses what it does have to "imagine" what it "thinks" might fit the gap.
Source: YouTube · AI Harm Incident 2025-08-13T02:2… · ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_UgwI41YU0VnRJRs2asV4AaABAg.ALglJix9t3FALk9DlFuNoV","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgwI41YU0VnRJRs2asV4AaABAg.ALglJix9t3FALkKV4YMbAT","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwI41YU0VnRJRs2asV4AaABAg.ALglJix9t3FALns0JgnW0u","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugymz11Qrw9ZffK0g-J4AaABAg.ALeh_vGSvmhALeiCdxyY8u","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgzvQY7aoj7tpgpDA6F4AaABAg.ALdxDuKEmk-AMQHHme7zAY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxMkRvrFqtibCKRJsR4AaABAg.ALWomSbKikgALWpIYrgRoM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgwuKG5OyDpCKFQWsxB4AaABAg.AL9oG65w0xGALSH9lBO5tY","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgwuKG5OyDpCKFQWsxB4AaABAg.AL9oG65w0xGAL_j8YlHoYs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxASsktvLEwAjh2H754AaABAg.AL8i-Z65gSiALhAoCTF0N_","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgxASsktvLEwAjh2H754AaABAg.AL8i-Z65gSiAMGzk8b4UjV","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]