Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The irony is that thoroughly recorded, very formal processes with defined rules and procedures, which still aren’t at all mechanical but rely on interpretation of human language and argumentation, are the absolute best use case for large language models.
And that is exactly how courts operate, recording every detail of those operations in a literal library full of past cases.
While a large language model can never beat keyword search in an archive like that, it could be better suited to interpreting meaning and returning results based on that.
But the system would have to be designed to do just that: it would be forced to return only actual items from a validated dataset. Hallucinations would then manifest as irrelevant results, not non-existent ones.
ChatGPT does not do that. The GPT model simply isn’t compatible with this without completely transforming how it operates. While GPT-like models have been extended to support tool usage and abstractions like layering, this kind of application would be a monumental overhaul.
ChatGPT cannot “think” without spewing some text out, which is very inconvenient for many tasks. And as a related side effect, anything it repeats verbatim limits its ability to produce relevant output in general: accurately reproducing a citation means it is heavily discouraged from “thinking” outside that citation.
Lastly, the model is simply incapable of predicting what it will output immediately after the current output. If you ask for lyrics that rhyme, it will choose a word that seems likely to rhyme with something, and only figures out what that something is once it gets there.
So you can see why citing a legal argument is a nightmare task for it: GPT can either cite accurately, or cite in such a manner that it is, at any given moment, likely to be a desirable thing to cite in the first place.
It cannot know who won when it starts the citation. It has some “knowledge” (learned weightings of tokens) about whether the current token “looks” like part of the correct answer, but it can’t skip to the last page and verify that it actually is, even if it would state that just one or two words later.
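The retrieval-constrained design the comment argues for can be sketched minimally. This is a toy illustration, not anyone's actual system: a hypothetical `constrained_lookup` over a tiny made-up corpus, using bag-of-words overlap as a stand-in for real semantic matching. The point is structural — because results are drawn from the fixed dataset by construction, a bad match can only ever be an irrelevant real item, never a fabricated one.

```python
# Sketch: lookup constrained to a validated corpus.
# All names and data here are hypothetical illustrations.

def tokenize(text):
    return set(text.lower().split())

def constrained_lookup(query, corpus):
    """Return the ID of the corpus item whose text best overlaps the query."""
    q = tokenize(query)
    best_id, best_score = None, -1
    for case_id, text in corpus.items():
        score = len(q & tokenize(text))
        if score > best_score:
            best_id, best_score = case_id, score
    return best_id  # always a real key of `corpus`, by construction

validated_cases = {
    "case_001": "contract dispute over delivery terms",
    "case_002": "negligence claim after workplace injury",
    "case_003": "copyright infringement in published lyrics",
}

result = constrained_lookup("injury at work negligence", validated_cases)
```

A hallucination in this setup surfaces as `constrained_lookup` returning, say, `"case_001"` for an injury query — a wrong but verifiable citation, which is exactly the failure mode the comment says is preferable to an invented one.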
Source: youtube · Topic: AI Responsibility · Posted: 2023-06-10T18:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzRDPTO1gVHJ2wgoJx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwFGa7la3pXMm5JZ2l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnuakR89i9JyjgN_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz1AQY15vIHmDjQg_B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMDKWeHCePqkQ7Pz14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwIDKUXqRwhSb5lpO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGl4Iycu8ghTRsj8x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx35JiGi9Cn1EdlP8d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwrAi_A7oA9zeqakiN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxZjZCDoHytdP9ejO94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
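A response like the array above has to be checked before its codes land in a results table, since the model may emit values outside the codebook. A minimal validation sketch, assuming a codebook inferred from the dimension values visible on this page (the allowed-value sets and the sample rows below are illustrative, not the project's actual schema):

```python
import json

# Assumed codebook: dimension names match the response keys above;
# the allowed values are inferred from this page and are illustrative only.
SCHEMA = {
    "responsibility": {"none", "user", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def validate_codes(raw):
    """Parse a raw LLM response and keep only rows whose codes fit the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical abbreviated response: one valid row, one with an
# out-of-codebook value that should be rejected.
raw = json.dumps([
    {"id": "ytc_x", "responsibility": "user", "reasoning": "virtue",
     "policy": "none", "emotion": "approval"},
    {"id": "ytc_y", "responsibility": "everyone",  # not in the codebook
     "reasoning": "mixed", "policy": "none", "emotion": "fear"},
])
kept = validate_codes(raw)
```

Rejected rows would typically be queued for a retry or manual coding rather than silently dropped; the sketch only shows the filtering step.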