Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “There is a sense that modernisation is saturated all generations in degrees of A…” (ytc_UgyzsEOfZ…)
- “I been seein those ai deepfake ads for a while and honestly I thought it was fak…” (ytc_Ugwq7_mAB…)
- “One of the interesting things is the fact the “AI” solves problems in ways that …” (ytc_UgxK5YfFn…)
- “Cannot believe that i am hearing this. Had no idea that chatbots even exsisted. …” (ytc_UgzAEwGBd…)
- “As an AudiHD boy, I'll take my silly notebook doodles over an ai image anytime, …” (ytc_UgwkZwbBp…)
- “At 1:54, could you please provide clarification? Did the user have Autopilot, En…” (ytc_Ugyke8Ls5…)
- “And the AI has replaced those news readers in other languages- including the one…” (ytc_UgwiEiaY9…)
- “We’re not smart enough to figure out how to keep AI safe. But maybe AI can solve…” (ytc_UgyQTRHtH…)
Comment
Reports are that 95%+ of the current AI data centers that are online are under-used. That is like having a call center with 1,000 employees where only 50 are actually taking calls.
Coding fails because of the volume of information needed for a decent-sized piece of software. Current AI has limits on how much it can process at once: the current models can create small bits of a software project but lack an understanding of the overall project.
To reduce hallucinations, some AI installations run the prompt multiple times and come to a group consensus. It’s still not perfect, but it is an improvement.
I use AI in my projects. I’ll have the agents create images, which I correct in Photoshop. I’ll have it write bits of code to show me alternatives, but I decide what to use from the suggestions. Basically, I look at AI as a tool that assists, while I still control the final product.
youtube · AI Responsibility · 2025-12-18T15:4…
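The multi-run consensus the comment describes (often called self-consistency, or majority voting) can be sketched as follows. Here `query_model` is a hypothetical stand-in that simulates an LLM call so the example is self-contained; a real installation would query an actual model with temperature-varied runs.

```python
from collections import Counter

def query_model(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for a real LLM API call.

    Simulates three runs that mostly agree, so the example
    runs on its own without any model backend.
    """
    return ["company", "company", "developer"][seed % 3]

def consensus_answer(prompt: str, n_runs: int = 3) -> str:
    """Run the same prompt several times and keep the majority answer."""
    answers = [query_model(prompt, seed=i) for i in range(n_runs)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

print(consensus_answer("Who is responsible for this AI harm?"))  # → company
```

Repeated sampling does not eliminate hallucinations, but an answer that survives a majority vote across independent runs is less likely to be a one-off fabrication.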
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzKEkhKAybo9Yk7oax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzPuHOY7-f5DJfLfwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxUL5vXyS4PRl4LLOR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwLU-XplfCKYUEdjw14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgztFsta29eQZtRF_UZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzcldt6g2zsSJ0NXyd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSLgJdSHxByflYcrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugy5D5l5PVLcsA_wvLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiGt_CMT2-d_d5W8t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRKIIGTnIgPkEeZ4d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
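Because the raw response is a JSON array in which each record carries an `id`, the "Look up by comment ID" feature can be implemented as a simple dictionary index over the parsed records. A minimal sketch, using a two-record excerpt of the array above:

```python
import json

# Two records excerpted from the raw LLM response above.
raw_response = """[
  {"id":"ytc_UgzKEkhKAybo9Yk7oax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzPuHOY7-f5DJfLfwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# Index the coded records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = records["ytc_UgzKEkhKAybo9Yk7oax4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → developer outrage
```

The same index also makes it easy to render a record as the Dimension/Value table shown above, since each JSON key maps to one table row.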