Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah sure reaches AGI in a few years. Meanwhile you have hallucinations, Compute Efficiency Frontier, digital tar pits, and not to mention no more new content for LLMs to learn from. Since they don't extrapolate, you will have a brick wall of learning which a LINEAR model cannot overcome
youtube AI Governance 2025-12-19T21:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzkgVUMA5PV4p8jo_54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw6QsfUPnHlgW-8Aip4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzW8ueJF30KezWIjrN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwmVD3gBakM688p0Ah4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwAcm8aLvBgOWzKx9x4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyb9oiJTJddFH1H3ih4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxEGSDIaDwTN7o7_494AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzR1qvEQ6ZjZrlS-St4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyAo4gWps-XD0ngUrt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxY5cjXxYrvTlD4ec54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]