Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
More and more it seems to me LLMs will not reach AGI. This makes me happy. LLMs will be good tools for many things and already increase productivity but really useful agents seem out of reach with these types of models.
youtube AI Governance 2025-09-04T15:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgytmEmZgQdo-jFVzWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgztJERQ1zXAwiIF9pF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwlto0MVmFcDGsNf1J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGHs5_Pwav6UtWdLN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyggsZW-p7XP-VgdG94AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx7VOFco2GoZ08djsF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyxc90j8mDv1aiAMKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8wuIoBeTn185Q_8B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEAY5qhFcVCkxJZgl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzGEucBKZTrkUZTwcF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
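Since the raw response is a JSON array of per-comment coding objects keyed by `id`, a single comment's coding can be looked up directly. A minimal sketch, assuming the response text is the array shown above (the function name `coding_for` and the two-entry excerpt are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of coding objects.
raw_response = """[
  {"id": "ytc_Ugx7VOFco2GoZ08djsF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzEAY5qhFcVCkxJZgl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if it is absent."""
    return next((row for row in json.loads(raw) if row["id"] == comment_id), None)

coding = coding_for(raw_response, "ytc_Ugx7VOFco2GoZ08djsF4AaABAg")
print(coding["emotion"])  # approval
```

Returning `None` for an unknown id makes missing codings explicit, which matters when the model drops a comment from its batch response.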