Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even if that happens, the unlikely case of an LLM pulling out exactly one distinct case and present it as output to a user —how would this user know its real and not hallucinated?
YouTube · AI Moral Status · 2025-11-17T07:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy4pJaFlC1ST8TsMQR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1lo-NF10pxHul2dN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz47XLns_ytmoymSed4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw5odvMNsN8ygFDmw94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyqDKNNPSF9Xnxh-Bt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwgEb94RHyjBbB2VSp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgwJxYo1jPc91AgP_tR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxzAYWLju6LxUTrLcZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzdabuyv90U6Mkkov94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwdQmVNb-enV5pT2yp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
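A raw response like the one above can be inspected programmatically rather than by eye. The sketch below is an assumption about how one might do this, not part of the coding pipeline itself: the helper name `coding_for` is ours, and the raw string is abridged to a single record from the output above.

```python
import json

# Abridged stand-in for the verbatim LLM output shown above:
# a JSON array with one object of dimension codings per comment id.
raw_response = '''[
  {"id": "ytc_Ugy4pJaFlC1ST8TsMQR4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw, comment_id):
    """Parse a raw LLM response and return the coding for one comment id."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed model output: nothing coded to inspect
    # First record whose "id" matches, or None if the comment was not coded.
    return next((r for r in records if r.get("id") == comment_id), None)

print(coding_for(raw_response, "ytc_Ugy4pJaFlC1ST8TsMQR4AaABAg"))
```

Returning `None` both for malformed JSON and for a missing id keeps the check simple: any comment for which `coding_for` returns `None` needs a manual look at the raw output.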