Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
this entire concept is only substantiated on the idea that LLMs will advance exponentially, which they will not and have never. It may SEEM like it is getting exponentially better, such as when comparing will smith eating spaghetti years ago to now, when in reality, the jump from 'no video generation' to 'video generation' is vastly greater than 'less detailed generation' to 'more detailed generation'.
And in 2009 people speculated about how rapidly phones were advancing, and all the wild implications it would have. Well it turns out that the practical limitations of that technology were topping out right around then, and our phones really arent any fucking different then they were 16 years ago. They are just a fair bit faster and have some better integrations.
LLM technology is poorly understood by the masses, and this is either projected by philosophical types who also have a poor grasp of the underpinning technology, or used to manipulate the masses by those who do for personal & financial gain.
I won't get into how it really works because its a long conversation, but it is important to understand that we are basically already in the 'iPhone 4' stage of neural net AI, where its like "well, there really arent that many foundation level improvements to make at this point. the technology we have IS the technology. Lets throw more power at it" It's going to do what it does now a bit better and a bit faster, but it is never going to just magically do a whole new thing because you gave it 100x more power. It's going to do what it can already do, 100x faster. And sure there will be incremental improvements, but I assure you, unless a fundamentally different technology is released... there will be plenty of shit AI does that makes it look so incredibly stupid and non-intuitive that people will never trust AI to autonomously perform tasks beyond things that probably could have been automated 10 years ago, but weren't due to inconvenience.
Sincerely,
someone who works with AI on a daily basis for high-level needs, and is fully aware of all the most core-fundamental flaws that it has because it has had them for years and it makes the idea of it doing broad process autonomously at a high level completely laughable.
youtube · AI Governance · 2025-09-12T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxgsa-aDi3R5NxLAEd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzfUEJXyVUuTSf00Ol4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy71SvJoYSVhiOty-R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwim4mwfbHcMedPKTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyD2QqEg5Rn9OSTN4t4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzzceo7t__UoSHqXId4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwFJND1_r0PVJUeIKh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyYcBauVrdLgN4qvMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzph7RJ7dTLp-6dt614AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzRRZhUPqeX0q2Vb5t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
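A raw response in this shape can be parsed and sanity-checked before the labels are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the sample records and table above, and the real codebook may define more labels. The ID prefixes `ytc_`/`ytr_` are likewise assumed from the samples shown.

```python
import json

# Allowed labels per coding dimension, inferred from the sample response
# above (hypothetical — the actual codebook may define additional values).
SCHEMA = {
    "responsibility": {"none", "unclear", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "mixed", "outrage", "fear", "indifference", "resignation"},
}


def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Records with an unrecognized ID prefix or an out-of-schema label are
    skipped rather than silently stored, so coding drift is caught early.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id", "")
        if not cid.startswith(("ytc_", "ytr_")):
            continue  # not a recognized comment/reply ID (assumed prefixes)
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

Keeping validation separate from lookup means a malformed batch fails loudly at ingestion time instead of surfacing later as an inexplicable `unclear` row in the results table.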