Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We've known this outcome for a long time. This is why quite a few tech people, including myself, think we need UBI to smooth the transition. We want AI to do a certain thing, and AI is not 100% reliable. We're building it, teaching it, refining it. The more it evolves, the more power it needs, the more space, the more specific info, so it doesn't completely hallucinate everything.

People in tech want AI for a lot of things because coding and configuration these days are tedious and complex. We're trying to use AI to get more boilerplate code done, to match the right integrations together, and to write technical documentation that has all the correct version info in it and doesn't miss the little things we don't think about. If you imagine some nerd building their dream video game, all that we have ML and AI doing starts to kind of make sense. Anything less complicated than what we're trying to do virtually and digitally gets assisted, maybe overtaken, by AI.

The idea isn't to make all 21st-century jobs obsolete and have everyone just go brain-dead on social media. The goal, for at least some of these nerds, is to be an advanced, space-faring people and/or achieve some sci-fi feat. There will be humans who are less likely to get addicted or lose interest in some end goal (like going to the moon and solving huge problems), and there will be those that are eaten up by an AI VR world. Evolution.

But as to the issue with the present: as someone who works with AI as a tool for my job, I have much more I could say about the topic and how our smooth-brained politicians are too far behind. AI will replace them before they have the sense to put in any meaningful guardrails.
youtube 2024-11-08T16:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
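
A minimal sketch of how a coded result like the one above could be represented and checked in Python. The field names mirror the table, and the allowed label values are inferred only from the raw responses shown below; the coder's full vocabulary may include labels not listed here.

```python
from dataclasses import dataclass
from datetime import datetime

# Label values observed in the raw responses below.
# Assumption: the real coding scheme may define more labels than these.
RESPONSIBILITY = {"user", "developer", "distributed", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"liability", "none"}
EMOTION = {"fear", "indifference", "outrage", "approval", "resignation"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject any label outside the observed vocabularies.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```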
Raw LLM Response
[ {"id":"ytc_Ugy3h1TsFdAMrvBf2kl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwEgZxHpQMvoPM9YU94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxo_n0_BvtP9J8kmUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyJi9cX5Hp39fYwB_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwTlbzpMsFP1SQpPQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwamQgNuDXxJULOeax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugyv_IbSpuZ0GfA1d9F4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwplMuteW-rp_kC_jB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxLOqUydsyYbTEaBRV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzJn3kel_9wJjHMCwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]