Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The only benefit non-medical AI brings about is making CEO's, tech bros, and sha…
rdc_n0gq7jd
One day military will be doing nothing on the ground and only playing ai video g…
ytc_UgwxXCdvN…
I condemn him as much as you do. I am an artist and despise AI as much as you (o…
ytc_Ugx87sDSk…
@THERandomGuyManNo it does not lmao. AI only affects COMMERCIAL artists who cre…
ytr_UgzPp7eL-…
@user-vw4cx8fc9sname Thank you for commenting! Your observation about the video …
ytr_UgwJJN571…
Basically everyone on Earth has been used to train AI with consent and compensat…
ytc_UgzEhdvFv…
They should put a clock on the front of each of those robots, just to constantly…
ytc_Ugy0huZuC…
Keep in mind the robotaxis are still in beta testing they are not fully develope…
ytc_UgzOAAcQL…
Comment
This video largely overestimates the current competencies of modern "AI" systems. First, ChatGPT doesn't understand a single thing. It generates text based on the tokens in its training data; it has no understanding of the text it puts out and cannot think abstractly. I have noticed this in my own personal use: when I use it to write a story, after a few messages it can lose track of the plot, and after a dozen messages it loses the plot completely and I have to correct it many times on a single detail. It has not replaced paralegals. There is a famous case where a lawyer tried to use AI to help him prepare for a case, and ChatGPT made up citations and entire sources; he received a $5,000 fine for it. The rate at which it does this is also insanely high, with some estimates saying ChatGPT hallucinates 27 percent of the time, with nearly half of its statements containing factual errors. LLMs are also not good at math. I have seen a study where the latest OpenAI o1-preview model can only get the times table up to 9x9 right half of the time. It might even hallucinate MORE than the current 4o model, and costs about a dozen times more to use. The hallucination problem is a massive roadblock for applying these "AI" models in the real world on their own. Until it is solved, I am very skeptical of the claim that LLMs will take our jobs. This also ignores the jobs these things can't even replace, such as counselor or therapist; attempts at that have ended horribly for those who relied on these AI systems. It likewise ignores the insane amount of energy these things need to operate, and the current and upcoming lawsuits that will make their training data smaller and smaller.
AGI is such a nonsensical term, as no one actually agrees on what it means, and I do not think you should trust predictions of future technology when current technologies are clearly unable to support such a vision. If we went off past predictions, we'd have flying cars and hoverboards by now, yet we do not. I would suggest checking out Emily Bender on the subject of AI, as she does a good job of deflating the hype around it.
youtube
2024-11-09T00:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy3h1TsFdAMrvBf2kl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwEgZxHpQMvoPM9YU94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxo_n0_BvtP9J8kmUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJi9cX5Hp39fYwB_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwTlbzpMsFP1SQpPQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwamQgNuDXxJULOeax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyv_IbSpuZ0GfA1d9F4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwplMuteW-rp_kC_jB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLOqUydsyYbTEaBRV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzJn3kel_9wJjHMCwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
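A raw response like the one above can be validated before it is stored: parse the JSON array and check each row against the coding dimensions. The allowed value sets below are a sketch inferred only from the examples visible on this page; the full codebook may define additional values, and the helper names are hypothetical.

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the rows shown on this
# page (assumption: the real codebook may permit more values than these).
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation"},
}

# Hypothetical two-row sample in the same shape as the raw LLM response.
raw = """[
 {"id":"ytc_example1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_example2","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

rows = json.loads(raw)
for row in rows:
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")

# Tally one dimension across the batch, e.g. for a summary panel.
emotion_counts = Counter(r["emotion"] for r in rows)
print(emotion_counts)
```

Rejecting out-of-vocabulary values at parse time keeps a single malformed model response from contaminating the coded dataset.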