Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly it is the alignment issue and not the mass unemployment that concerns me the most. The basic problem is that it is the humans who have to teach the AI what alignment looks like and humans don't even have the best interest of humans down yet. Humans consistently and predictably act against their own best interests both as individuals and as a species. We are incapable of aligning these AI systems with our best interests.
YouTube · AI Jobs · 2026-02-26T18:2… · ♥ 201
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwyIGsxnqLvxD7mO-N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzb5kWEFuK2PQrBBpV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwP7Bh9IMgB6iON91R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugy2XNPhgBbNFMLbP9p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyIr-zBG_tecqB_gEN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwBbSysL-XG9SsojCJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy_LCq-tFe1PGdm9jd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzvjkfqXE7gAIsVSfJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw9jkbP4TqyGhjmEIt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwuMIMuzdUA94ESwyh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
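The raw response above is a plain JSON array with one object per comment. A minimal sketch of how output in this shape could be parsed and sanity-checked before the per-comment codes are stored (the field names come from the response itself; `parse_codes` and the required-key validation are illustrative, not part of the tool):

```python
import json

# One record from the raw LLM response above, abbreviated for the example.
raw = '''[
  {"id": "ytc_UgzvjkfqXE7gAIsVSfJ4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "fear"}
]'''

# Keys every record must carry, per the examples in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(text: str) -> dict:
    """Parse model output into {comment_id: codes}, rejecting incomplete records."""
    records = json.loads(text)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return coded


coded = parse_codes(raw)
print(coded["ytc_UgzvjkfqXE7gAIsVSfJ4AaABAg"]["emotion"])  # fear
```

Keying the result by comment id makes the lookup for any single coded comment (like the one shown on this page) a dictionary access rather than a scan of the array.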