Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is no guaranteed safeguard for your future in the face of AGI. Even the CEO of whichever company builds the first AGI is only as safe as their model is aligned. Even with contained AGI, prior accumulated wealth and land is only going to help you while law and order is maintained. With mass employment and people starving, the only way to maintain order is force. An AI agent or human emperor will need to maintain order for some time, until humans can be replaced for energy/food production, at which point every human lives at the whim of the agent/emperor. Being a plumber might buy you a decade or 2, but I'm not sure it's a better strategy than just trying to stack cash and land with the most lucrative possible job today. For anyone under 22 or without a college degree, the trades might be the best option available.
youtube · AI Governance · 2025-07-07T23:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy1dLCQwAyY6rzqbvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7CfOxR7W-6ST5icZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzO0fYK7MCQ8y99omZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw2orxjO7BQcUxUWwF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwwJcaZYDbiOOfGdCV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx2y2jLkh41wxDoJst4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCDwG3GFXb81kbFK54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyujVen9qjyvWDnvfB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwTpLMn_Sai3uUksSh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzpWAScfRmTBhmuHO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
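The coding result shown above is recovered from the raw batch response by matching on the comment id. A minimal sketch of that lookup in Python, assuming the raw response parses as a JSON array of per-comment coding objects (the `lookup` helper name is hypothetical; the snippet inlines two entries from the response for brevity):

```python
import json

# Two entries copied from the raw batch response above; a real run would
# use the full JSON array returned by the model.
raw = '''[
  {"id":"ytc_UgwTpLMn_Sai3uUksSh4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzpWAScfRmTBhmuHO54AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coding object for one comment id from a raw batch response."""
    codings = json.loads(raw_response)
    by_id = {entry["id"]: entry for entry in codings}
    return by_id[comment_id]

coding = lookup(raw, "ytc_UgwTpLMn_Sai3uUksSh4AaABAg")
print(coding["policy"], coding["emotion"])  # liability fear
```

The id used here is the one whose entry matches the Coding Result table (responsibility ai_itself, policy liability, emotion fear), which is how the displayed row and the raw output can be cross-checked.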