Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In theory, one would have no issue with this, if we were living in a Star Trek-like world. As long as there is a true universal basic income and the means to produce resources/energy independently, etc. But we don't live in such a world, and people that would benefit from UBI fight against it. Even if we made robotics and AI illegal in America, US companies would still be competing with automation overseas. The problem with technology like this is that it's a true Pandora's Box; we can't go backwards, and the technology will absolutely happen. Solutions to this aren't necessarily to try to force people to live in a backwards facing state, but in the short term, there are no other options. Until we reach a point where true UBI is accepted and feasible, we have to chain billionaires from ruining the planet.
Source: youtube · AI Jobs · 2025-10-08T16:1… · ♥ 2
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy9k0bcMPvsAIgnwuZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxAJpalnUcADfjZgLd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwOAcHqaht_vjaaEBB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyPJIQPRr7mx0o9DYp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzo-UX0eSuV3RWZQvl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzuMnzYunU49CiH9e94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwwiLrUlK0nezpO7dd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzm4ViWohVcY9JEbYl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzmvxKgfpTOCpOBIzh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwua7i1r0_VHXJ847R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
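The batch response above can be matched back to an individual comment's coding by its `id` field. A minimal sketch, assuming the raw response is valid JSON (the function name `coding_for` and the truncated sample payload are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative excerpt of a raw batch response: only the entry whose id
# matches the comment shown above (the real response holds ten entries).
raw_response = """[
  {"id": "ytc_Ugwua7i1r0_VHXJ847R4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "resignation"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response and return the coding for one comment."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"no coding found for comment {comment_id!r}")

result = coding_for(raw_response, "ytc_Ugwua7i1r0_VHXJ847R4AaABAg")
print(result)
```

For the comment above this reproduces the table values: responsibility `distributed`, reasoning `consequentialist`, policy `regulate`, emotion `resignation`.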