Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Assuming that in the near future AI machines won't take over all the jobs that keep us alive (working at water facilities, working with electrical infrastructure, farming, etc), government will need to step in with programs that can retool people's skills and potentially provide cost of living allowance while the retooling is happening by a specific deadline. In the other case where AI machines will take over most all the jobs that keep humans alive, we will need to implement UBI to where people will have food, roof over their heads, etc, to be mostly comfortable, but not too comfortable to become lazy. For example, UBI covering a studio apartment, but that's it and people needing to work to get larger apartments/condos/houses. AGI/ASI WILL happen since humans are proof that general intelligence exists in nature. It's hard to tell when. Current tech will not become AGI/ASI, but it will likely help humans build it. AGI/ASI does not have to be doom and gloom. It will be a necessity especially to reach out into the cosmos to answer questions about the universe and find the answer to who else is maybe living out there in the deep expanse of space. I'm doubtful AGI/ASI machines will not be monolithic where they all share the same mission similar to many films in popular media. AGI/ASI will be capable of having emotions as well. Emotions are not a uniquely human thing. Animals share it with us. And again, emotions exist in nature, so they can be replicated. Emotions can both be good and bad, but AGI/ASI machines will display a variety of things (some maybe with only specific emotions, some with none, some with an entire range of emotions and maybe even emotions that humans are unfamiliar with). In any case, I agree with the direction Bernie is going on this.
youtube AI Jobs 2025-10-09T17:2…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | government
Reasoning      | consequentialist
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwGYnCrqeoPNigv-Op4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"caution"},
  {"id":"ytc_Ugy8Y_jnIA9IaVEumrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzz9p0QcCxeJpB197d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxqwqfeGyvDEbvp0DJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzcqexver0GB2GOr2R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4TZZF5TlkmodmIKt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwKEhbtc6ZqjUofD0p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxwa_uma6LcUzoPrFl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzfioLYP7m0fDpoNGx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxzk4xUB-s1FjhGxCR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
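The raw response above is a JSON array of per-comment codings, one object per comment with the four dimensions shown in the Coding Result table. A minimal Python sketch of how such a batch response could be parsed back into per-comment records is below; the function name `parse_codings` and the fixed dimension list are assumptions inferred from the values visible in this report, not part of the pipeline itself.

```python
import json

# Excerpt of a raw LLM response in the format shown above (one record kept
# for brevity; the real response is a JSON array of ten such objects).
raw_response = """
[
  {"id": "ytc_Ugxwa_uma6LcUzoPrFl4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]
"""

# The four coding dimensions, inferred from the report's result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: {dimension: value}}.

    Raises KeyError if a record is missing "id" or any coding dimension,
    which surfaces malformed model output early.
    """
    out = {}
    for record in json.loads(raw):
        out[record["id"]] = {dim: record[dim] for dim in DIMENSIONS}
    return out

codings = parse_codings(raw_response)
print(codings["ytc_Ugxwa_uma6LcUzoPrFl4AaABAg"]["emotion"])  # fear
```

Indexing by comment id this way makes it straightforward to join a record back to its original comment, as the report does for the comment shown above.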