Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Reading that headline about Palentir I translated it on the fly into non-politic… (ytc_Ugxm0jYj4…)
- Let's talk about facial recognition algorithm. There is some facial recognition … (ytc_Ugh_fD31y…)
- This cop 100% thinks his stupid ai girlfriend loves him or something. This is th… (ytc_Ugy40L1kU…)
- See why is all this even being called Ai? Shouldn’t it be called Vi (Virtual int… (ytc_UgxKNUJ1L…)
- Lol you know how many people misidentify suspects? Don't cry about facial recogn… (ytc_UgwgBSjzT…)
- I know i just started this video but they do in fact "bomb brown countires" . Ev… (ytc_Ugyrgf6bh…)
- Huh ai will dominate the human race. Funny joke yeah 😂😅. Everyone one saw that v… (ytc_UgyzZQ2M7…)
- How a man with not enough natural intelligence and believe to the climat change … (ytc_UgzlNsyUB…)
Comment
The thing with that particular "hallucination" "how many strawberries in R" is it's almost like it's compensating for the user mis-speaking "how many R's are in strawberry?" Almost as if it thinks the person is dyslexic or something.
While I question whether AI's can truely become intelligent (by our own determination of intelligent). . .
I have thought for some time that there needs to be a kind of over-mind or another routine of some kind that examines possible answers and also either another thought center that can bring differing ideas together to combine them, or build it into the over-mind somehow. Sorry, I lack the vocabulary to explain my idea.
When you feed an AI a bunch of books, by Albert Einstein for example, it teaches the AI some things, but it does NOT teach the AI how come by those things itself. In other words, it doesn't make the AI an Albert Einstein. Which is in itself an interesting case in point as Albert explained how he formed his thought process.
The trick is how do you make a machine that thinks like an Albert Einstein and not just spew out knowledge it was fed. AI's are amazing, but you must remember (at least in their current incarnation) that they are basically over-glorified databases. (possibly understated.. =) )
I'm not saying their bad, I'm just pointing out...
Source: youtube · 2025-11-07T21:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyPENHm8nttBYyog8x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwVMlPN66H4ujPYBwx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0bVcznkDOms2rEzF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxofFY44puJdtb-ixl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzanZnHhANC7GhGbRV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBId35OZPaLaOAmV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7qzFp2MGrAS1AcpV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwNPYZC7pSTwHWMnUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxa4KLoV8bx0nh-iGx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTQZzqUx7F2rXrskd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
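
A raw response like the one above can be checked against the coding dimensions shown in the result table before it is accepted. Below is a minimal validation sketch in Python; the allowed value sets are inferred only from the values appearing in this dump, not from a documented codebook, so treat them as placeholders.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# values seen in this dump; the real codebook may define more labels.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate_response(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(records, list):
        return ["top-level value must be a JSON array"]
    errors = []
    for i, rec in enumerate(records):
        cid = rec.get("id", f"<record {i}>")
        if not str(cid).startswith("ytc_"):
            errors.append(f"{cid}: id does not look like a comment ID")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{cid}: {dim}={value!r} not in {sorted(allowed)}")
    return errors

raw = ('[{"id":"ytc_UgyPENHm8nttBYyog8x4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(validate_response(raw))  # an empty list means the batch passed
```

Running the validator over each raw response before writing rows into the coding table catches malformed JSON and off-schema labels early, rather than at analysis time.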