Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Silicon Valley are using the means of greedy consumers within the system of capi…" (ytc_UgzqdqHJx…)
- "people, probably a good idea to write a letter to whichever AI you use and ask f…" (ytc_Ugya988w_…)
- "Such B.S. if AI is so good , get it to come over and paint my house or do any …" (ytc_UgyVrxs9j…)
- "I make both art with my hands and AI, AI at the moment is misunderstood. It actu…" (ytc_UgyvL5BIk…)
- "I believe the next major breakthrough will be in A.I. For sustained growth simil…" (ytr_UgxzFgoyE…)
- "Can we call agree that ChatGPT is the most dangerous AI tool to use for human em…" (ytc_UgxMUw97i…)
- "That's a more proper way to use AI: as a tool. Not making it do your job for you…" (ytr_Ugx13XB9j…)
- "These officers May commit the same crime again if you use their predictive polic…" (ytc_UgzdmelT4…)
Comment
A lot of what she asserts is well supported, but generally, well over 99% of the energy/carbon costs are during training, not during inference (end-user usage). This assumes a baseline of a Google query vs. a GPT-4.x LLM query. We do need more efficient ways to train AI, but AI has already come up with a new algorithm that dramatically reduces the cost/carbon impact of the highest non-AI use case: search/sort routines.
Also, the current 10-fold increase in price/performance per year, coupled with the fact that "we have no moat, and neither does OpenAI", means soon any kid in their bedroom, bad actor, or small team from an adversarial country can engineer something far beyond today's AI for 1/thousandth to 1/millionth the cost.
What she has completely missed is that unlike Moore's Law, which takes about a lifetime to become over a million times more capable at the same price, AI will grow to that extent a couple of times in about a decade. So we are not "all in it together and able to decide together where it goes", as individuals with a few thousand dollars will be able to do things far beyond what a team with a billion could last year. And maybe more importantly, a very advanced future version of where we are now is only a few years away given the double exponential growth of AI.
youtube
AI Responsibility
2024-02-12T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyhX1bLYVaXaWys16B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwJ6XXnt3BknYD75194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyzuHjd9BKtUxlQSLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzW5jSwYFEbumylX3V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2gp957etl9p3Ck1N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7HpySi8YMZLCCjNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxMS6s7X58GmHFoiXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyx-4wX03RPyG2pmFN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxzYJCJ70faZQaS5nF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
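The "look up by comment ID" view above can be reproduced from a raw response like this one. A minimal sketch, assuming the model output is a JSON array of per-comment codes with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `lookup` helper is illustrative, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of per-comment codes. The shape and
# field names are assumed from the example response above; only the
# first two rows are reproduced here.
raw_response = """[
 {"id":"ytc_UgyhX1bLYVaXaWys16B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwJ6XXnt3BknYD75194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

# Build an id -> codes mapping so any coded comment can be fetched by ID.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (hypothetical helper)."""
    return codes_by_id[comment_id]

print(lookup("ytc_UgwJ6XXnt3BknYD75194AaABAg")["emotion"])  # prints "approval"
```

Indexing by `id` in a dict makes each lookup O(1), which matters when a single coding run returns hundreds of coded comments.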