Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"The paper fails to explain how the AI agents make such leaps in intelligence" ... isn't this a well known thing that once AI hits AGI it can instantly start self improving itself and as it improves and gets more efficient, faster etc it also gets better which helps it further improve etc etc
youtube AI Governance 2025-11-24T03:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyE25rJzLROpYb-NVp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxOwjSECN_G7v7gIVh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgytErOUUXu4LCpMiZN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSTItTopZ0wyCcXSt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzlxQS8Du2SJZnjwLp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx_CLS_XJW1InWmuEB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxJELtoMWMxnMnJVBZ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzknuKSwlIe6A0mH_h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwOrY_NBBjoQKCi49p4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxbiIbIZ-vYHoatVdV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
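The raw response above is a JSON array with one record per comment, each carrying an id plus the four coding dimensions. A minimal sketch of how such a response could be parsed and validated; the dimension names come from the records above, but the parsing function itself is a hypothetical helper, not part of the coding pipeline.

```python
import json

# The four coding dimensions observed in the records above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check that every record
    has an 'id' plus all four dimensions."""
    records = json.loads(raw)
    for rec in records:
        missing = [k for k in ("id", *DIMENSIONS) if k not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
    return records

# Example using the first record from the batch above.
raw = ('[{"id":"ytc_UgyE25rJzLROpYb-NVp4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["reasoning"])  # consequentialist
```

Validating the parsed array before storing codes catches truncated or malformed model output early, rather than at display time.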