Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A new paper gives a formal proof that AGI is NOT computable! Meaning neither LLMs nor any other agloritm can achieve AGI. The paper called "On the computability of artificial general intelligence" is on arXiv
youtube 2025-12-11T07:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz_LJ03pNgjiU6vlRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMzZsJvJtSMz7TH254AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQA1fP8Bd3l2BgDBR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxLFOMJLLgemi1WaXJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6W-p7Q4LjkJXD1EB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwoXtUs7_GfHor2z054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzUogT43Tyn59QHGWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgypoiMGTAmzOL08ed54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyk-cyt0p5K_7xJ9El4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyBfw8NmWELtt1SSTp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
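To inspect how a specific comment was coded, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch in Python, with two entries from the response above embedded inline for illustration (in practice the full raw response string would be used):

```python
import json

# Two entries copied from the raw LLM response above, embedded so the
# snippet is self-contained.
raw_response = """[
  {"id":"ytc_Ugz_LJ03pNgjiU6vlRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUogT43Tyn59QHGWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

# Index the codings by comment id for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Look up the coded dimensions for one comment.
coding = codings["ytc_UgzUogT43Tyn59QHGWV4AaABAg"]
print(coding["emotion"])  # outrage
```

The same lookup works for any of the four coded dimensions (responsibility, reasoning, policy, emotion), since each array element carries all of them alongside the comment id.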