Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is being fed so much misinformation that it couldn’t tell it apart from accurate information, which means its outputs can become unreliable, reinforcing biases and spreading inaccuracies instead of improving decisions. -- AI
youtube AI Governance 2025-11-23T20:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyE25rJzLROpYb-NVp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxOwjSECN_G7v7gIVh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgytErOUUXu4LCpMiZN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSTItTopZ0wyCcXSt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzlxQS8Du2SJZnjwLp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx_CLS_XJW1InWmuEB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxJELtoMWMxnMnJVBZ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzknuKSwlIe6A0mH_h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwOrY_NBBjoQKCi49p4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxbiIbIZ-vYHoatVdV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
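A raw response in this shape can be checked programmatically before the coded values are stored. The sketch below parses a response like the one above and validates each record against the four coding dimensions; the allowed-value sets are inferred only from the values visible in this dump, not from a full codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from this dump (assumption,
# not an authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

# A one-record stand-in for the raw LLM response string.
raw = ('[{"id":"ytc_UgzlxQS8Du2SJZnjwLp4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')

records = json.loads(raw)

# Flag any record whose value falls outside the expected set.
for rec in records:
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            print(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")

# Look up the record for the comment displayed in this panel.
target = next(r for r in records if r["id"] == "ytc_UgzlxQS8Du2SJZnjwLp4AaABAg")
print(target["responsibility"], target["emotion"])  # distributed fear
```

This kind of check catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently corrupt the coded dataset.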