Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples:

- "I highly doubt architects are losing job, architect isn't only concepting buildi…" (`ytc_UgzKSJqN-…`)
- "OpenAI needs an Internet connection because it requires too much processing to b…" (`rdc_m9i1cc2`)
- "So, do we now all need to become neurologists to confirm other people are actual…" (`ytc_UgzYTX4GR…`)
- "I am a retired programmer, but I still keep my hand in writing my own software. …" (`ytc_Ugx1y02mA…`)
- "AI will destroy humanity as we know it. Few if any new jobs will be created once…" (`ytc_UgyysUe7E…`)
- ""America, the worlds oldest democracy" 7:55 must be you hallucinating as well a…" (`ytc_UgycG_KQu…`)
- "its already been shown that AI works best in replacing middle-management, cause …" (`ytc_UgxhyuXWx…`)
- "That's an interesting perspective! The field of ontology, especially in philosop…" (`ytr_UgyeFtCMV…`)
Comment
You make the classic mistake of the whole AI world with your analogy to self-driving (not that self-driving exists).
LLM-based AI is text processing: essentially glorified text completion over the "entire" internet, depending on the model size. It correlates and analyses a question and connects pieces of text in a way that "makes sense". This is subject to error: errors in the data (most answers on Stack Overflow are wrong), errors in the model due to word size on the GPU (32-bit float, 16-bit int, or 8-bit int) and model size, the number of tokens in a phrase, and so on; and finally overfitting, where the model's output feeds back into itself too many times until errors saturate and the LLM always produces garbage.
LLMs use neural networks, a technology meant to solve this problem. Given that the English language is fraught with variance by locality (US vs UK English), implied meaning, double/triple negatives and all sorts of woolly descriptions of what you really mean, you can see why LLMs have issues; the deeper problem is that defining a good validation model is near impossible.
Self-driving uses numerical analysis, what we currently call machine learning, again based on neural networks, which create associations between numbers as mathematical calculations. This is an entirely different concept. Although ML makes direct validation difficult (we cannot know the meaning of the node weights or activation parameters with respect to the real world), we can build strong validation models that estimate the likelihood of error and act accordingly. We can reduce errors by using the validation models as a learning metric, though we are still subject to overfitting etc. We can therefore define a quality metric for self-driving, as we do now, which rates it as crap (so your Tesla example is Gen-2).
Source: youtube, AI Jobs, 2026-02-16T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
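Each coding dimension takes a small set of categorical values. A minimal validation sketch, assuming value vocabularies inferred only from the raw responses shown in this log (the real codebook may allow more values; `ALLOWED` and `invalid_fields` are illustrative names, not part of the pipeline):

```python
# Observed value vocabularies, inferred from this log; not an authoritative schema.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "distributed", "user", "unclear"},
    "reasoning": {"mixed", "deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed", "unclear"},
}

def invalid_fields(record: dict) -> list:
    """Return the dimensions whose value is missing or outside the observed vocabulary."""
    return [k for k, allowed in ALLOWED.items() if record.get(k) not in allowed]

rec = {"id": "x", "responsibility": "company", "reasoning": "mixed",
      "policy": "none", "emotion": "fear"}
print(invalid_fields(rec))  # []
```

A coding result of all-"unclear", like the one above, would pass this check; it flags only values outside the vocabulary, not low-information codes.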
Raw LLM Response
[{"id":"ytc_UgxaZq2khsKvMG2YwMR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgygiB3-nBXgpelRCGh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwU5aRkyooDWS_kb5p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQDoIxcvD3yb3UUet4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCe0LoRXaoWM_1iWh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz58Wzcwm7sdadqF014AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzeJAkLnilO17lVkph4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeOU0U2_cvJQW8MeJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzB6QKy6EhVgxmH2914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvmJRq5zvRktpWGUp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
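Since the raw response is machine-generated JSON, looking records up by comment ID needs a parse step that tolerates malformed output (e.g. a stray trailing character where the closing bracket belongs). A minimal sketch, with a toy `RAW` string and a hypothetical `parse_coding_response` helper standing in for the real pipeline:

```python
import json

# Toy stand-in for a raw LLM coding response; note the stray ')' where ']' belongs.
RAW = '[{"id":"a","responsibility":"none"}, {"id":"b","responsibility":"company"})'

def parse_coding_response(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment id,
    repairing a trailing ')' that sometimes replaces the closing ']'."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        records = json.loads(raw.rstrip().rstrip(")") + "]")
    return {rec["id"]: rec for rec in records}

by_id = parse_coding_response(RAW)
print(by_id["b"]["responsibility"])  # company
```

The repair branch only runs when strict parsing fails, so well-formed responses pass through `json.loads` untouched.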