Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video largely overestimates the current competencies of modern "AI" systems. First, ChatGPT doesn't understand a single thing. It generates text based on the tokens in its training data; it has no understanding of the text it puts out and cannot think abstractly. I have noticed this in my own use: when I use it to write a story, after a few messages it can lose track of the plot, and after a dozen messages it loses it completely and I have to correct it many times on a single plot detail.

It has not replaced paralegals. There is a famous case where a lawyer tried to use AI to help him prepare for a case, and ChatGPT made up citations and entire sources; he received a 5,000 dollar fine for it. The rate at which it does this is also insanely high, with some estimates saying ChatGPT hallucinates 27 percent of the time, with nearly half of its statements containing factual errors. LLMs are also not good at math. I have seen a study where the latest OpenAI o1-preview model can only get the times table up to 9x9 right half of the time. It may even hallucinate MORE than the current 4o model, and costs about a dozen times more to use.

The hallucination problem is a massive roadblock for applying these "AI" models in the real world on their own. Until it is solved, I am very skeptical of the claim that LLMs will take our jobs. This also ignores the jobs these things can't even replace, such as counselor or therapist; attempts at that have ended horribly for those who relied on these AI systems. And it ignores the insane amount of energy these things need to operate, and the current and upcoming lawsuits that will make their training data smaller and smaller.
AGI is such a nonsensical term, as no one actually agrees on what it means, and I do not think you should trust predictions about future technology when current technologies are clearly unable to support such a vision at their current capability. If we went by past predictions we'd have flying cars and hoverboards by now, yet we do not. I would suggest checking out Emily Bender on the subject of AI, as she does a good job of deflating the hype around it.
youtube 2024-11-09T00:2… ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy3h1TsFdAMrvBf2kl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwEgZxHpQMvoPM9YU94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxo_n0_BvtP9J8kmUh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJi9cX5Hp39fYwB_Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTlbzpMsFP1SQpPQ94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwamQgNuDXxJULOeax4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyv_IbSpuZ0GfA1d9F4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwplMuteW-rp_kC_jB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxLOqUydsyYbTEaBRV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzJn3kel_9wJjHMCwx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
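A raw response like the one above is a JSON array of per-comment codes. A minimal sketch of how such an array could be parsed and checked against the coding dimensions — the allowed vocabularies below are inferred from the values visible on this page, not the tool's official schema:

```python
import json

# Allowed values per coding dimension, inferred from the table and
# raw response shown above (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"user", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"fear", "indifference", "outrage", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose codes
    all fall inside the allowed vocabularies."""
    entries = json.loads(raw)
    return [
        e for e in entries
        if all(e.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with a single well-formed entry (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # → 1
```

Dropping out-of-vocabulary entries (rather than raising) is one design choice; a stricter pipeline might instead flag such entries for manual re-coding.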