Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples (see the sampling sketch after the list):
- ytc_UgzFmlDn2…: "5 days because they wanted to make money without making sure that it was ready t…"
- ytc_UgyshW-sF…: "If I can replace your mother in a day with AI would you? Just for progress…"
- ytc_Ugyjyre6W…: "There’s no such thing as “AI art”. Whoever calls AI visual data translations as …"
- ytc_UgyKlkwpQ…: "So basically 'Stop worrying about future harm, real harm is happening right now.…"
- ytc_UgzJ0nYq_…: "If women start doing deepfake gay videos of them, will it help them understand t…"
- ytc_UgwVmkWMA…: "Meanwhile AI / Ok understood, point taken, noted down. / And next time you add the s…"
- ytc_Ugy3WZ0gx…: "Ai is literally just a Tool for untalented and/or unmotivated people to try and …"
- ytc_UgwVSBRr0…: "AI and robotics are meant to replace people. That’s why they won’t fund healthca…"
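A random-sample view like the one above can be drawn in a few lines. This is a minimal sketch, not the tool's actual implementation; the `comments.json` filename and the `id`/`text` fields are assumptions based on the layout of this page.

```python
import json
import random

# Load the coded comments (hypothetical export file; the real tool's
# storage format may differ).
with open("comments.json", encoding="utf-8") as f:
    comments = json.load(f)

# Draw eight comments at random, as in the sample list above; a fixed
# seed makes the sample reproducible across runs.
random.seed(42)
for c in random.sample(comments, k=8):
    # Truncate long comments to an 80-character preview.
    preview = c["text"][:80] + ("…" if len(c["text"]) > 80 else "")
    print(f'{c["id"]}: "{preview}"')
```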
Comment
"Fix the halluciantions" is a much more complicated problem than it sounds?
LLMs can not fix the hallucinations, they're a core consequence of how they work. To do so requires not a new model of LLM, but a fundamentally different sort of AI. Which means going back to research, and taking likely several years - possibly a decade or two - to build a novel technlogy and bring it up to something as usable as LLMs are now.
During that time, they continue to make no money, so the bubble still pops even if the hallucination problem can be resolved. They (as an industry) need to generate trillions of dollars of new revenue to cover how much money they've already lit on fire with this, and to cover the operating costs (GPU, power, etc), to be able to make a profit, and we're already seeing signs that investment money is straining. I do not think there is enough Venture Capital money to keep the bubble inflated for the time it would take to make an actually reliable (or profitable) AI, from where we are currently.
youtube · AI Responsibility · 2025-12-27T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
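The Dimension/Value table maps naturally onto a small typed record. Below is a minimal Python sketch; the class name, field names, and the sets of allowed values are inferred from this one example and the raw response below, not taken from the tool's definitive codebook.

```python
from dataclasses import dataclass
from datetime import datetime

# Allowed category values, inferred from this page; the real codebook
# may define additional categories.
RESPONSIBILITY = {"none", "company"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none"}
EMOTION = {"indifference", "resignation", "outrage", "approval", "mixed"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject values outside the known codebook early, so malformed
        # LLM output is caught at parse time rather than at analysis time.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unknown code: {value!r}")
```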
Raw LLM Response
[
{"id":"ytc_UgzrzhONLX7bdSVozrp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwNftwT3j4ZloL1kfV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzWAQlrfZTtwfbMiNh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwqMdu43iGd7bTnHz14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5RDZP3F2dN5T6Kep4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgytPavtmpqgKCLQFxZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEtcylN1hWxdsbo6F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7hLzfTOMVipvHQRl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy03q5kfV8LIOdJ4854AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx-a0LkDttDfZ_ZIQN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
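Since the raw response is a flat JSON array keyed by comment ID, the "look up by comment ID" view reduces to a single scan over the parsed records. A minimal sketch, assuming the response text has already been captured as a string; the function and variable names are illustrative, not the tool's actual API.

```python
import json
from typing import Optional

def lookup_coded_comment(raw_response: str, comment_id: str) -> Optional[dict]:
    """Find one comment's codes in a raw batch response like the one above.

    Returns the matching record, or None if the model skipped the comment
    or the response is not valid JSON.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    return next((r for r in records if r.get("id") == comment_id), None)

# Example, using the first record of the response above:
# lookup_coded_comment(raw_response, "ytc_UgzrzhONLX7bdSVozrp4AaABAg")
# -> {"id": ..., "responsibility": "none", "reasoning": "consequentialist", ...}
```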