Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Suspect most of those hypothetical 'agent misalignment' situations purported by AI companies never happened - and serve only to hype up the alleged capabilities of LLMS; any user of LLM AI agents will know they fail at even rudimentary activities, like filling out a simple web form, to say nothing of the hallucinations. See people like Ed Zitron, Yann LeCun, Gary Marcus et al for a dose of reality on the lackluster capabilities, false hope - not least hemorrhaging financial losses - of LLM models
Source: youtube · AI Governance · 2025-08-27T04:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy2sFO9gMXP5iBlB-h4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugy4otCoxcyQOrckVSF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw29efcp4iGac6pNZF4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgyOOYnhUQbsniVAfOl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxiArPGzLjLsyh6b9R4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"}
]
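The raw response is a JSON array with one coding object per comment. A minimal sketch of how such a response could be parsed and validated before being stored as a coding result (the allowed values below are inferred from the codings shown on this page, not from a documented schema, and may be incomplete):

```python
import json

# Allowed values per dimension, inferred from the codings above;
# the actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed codings."""
    items = json.loads(raw)
    valid = []
    for item in items:
        # Each coding must be an object with an "id" and a known
        # value for every dimension; anything else is dropped.
        if not isinstance(item, dict) or "id" not in item:
            continue
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(item)
    return valid

raw = ('[{"id":"ytc_Ugy2sFO9gMXP5iBlB-h4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(parse_codings(raw)[0]["emotion"])  # -> outrage
```

Dropping malformed items rather than failing the whole batch keeps one bad LLM coding from discarding the other comments in the response.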