Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Recognizing negative traits is particular races or ethnic groups is the definiti…" (ytc_UgzFeSqJG…)
- "Lets break this down with simple logic shall we? 1.first, ive never heard a guy …" (ytc_UgxuHywh8…)
- "Nah how can you be a “ ai generated image pro” as said by the guy ( not artist )…" (ytc_UgzqfukAA…)
- "I'm not necessarily a fan of Jackson Pollock, but when standing in front of one …" (ytc_UgzA2qZy1…)
- "High school students can do this job but you need more than high school educatio…" (ytc_UgxXJK2tZ…)
- "I've noticed using chatgpt recently will flat out lie about things just unwillin…" (ytc_UgwhceQGT…)
- "An ai bro saying "You’re a parasite" is possibly the stupidest thing I’ve ever h…" (ytr_Ugwb-IHlS…)
- "ai that can generate music wavelenghts can be dangeorus and even weaponized, sin…" (ytc_Ugweuma0a…)
Comment
One crucial aspect of this subject that I would like to hear more about is how foundation models are improved in an economic context. It seems to be an implicit assumption in this video that they will improve.
As far as I know, training a new model at this point takes around fifty days and entails direct losses of massive amounts of capital and natural resources, including burning up tens of thousands of those super-expensive Nvidia GPUs. I'm discouraged by the convoluted business partnerships (like OpenAI/Microsoft) in which investments can easily be claimed as revenue, distorting the perception of the value actually produced. From face value, it seems this growth is massively unsustainable, and any effort to increase AI's effectiveness will entail hemorrhaging large portions of the economy. Is the juice not worth the strangle?
youtube · AI Responsibility · 2025-10-01T13:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyUhEA3HoH6z7Gn4A54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxzfuWCN1N5F_n_bKV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlTtlIJFZwsxG4c5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyUMJmxNnqMnfWwP8d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz0z2FWQG57fIcjTQl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxczCXcEHL2aI3rwll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzjun0B7POdhTJ4eFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzSRTfMI6HVCeEZktJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxIDPZ5oC4_VrsWmOl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxC1acnJ5bjQMjrmi54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
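Responses in this format can be parsed with an ordinary JSON parser and searched by comment ID. The following is a minimal sketch (not the tool's actual code, and using an abbreviated two-record payload): the `lookup` helper and the `raw_response` variable are hypothetical names introduced here for illustration.

```python
import json

# Hypothetical abbreviated payload in the same shape as the raw
# LLM response above: a JSON array of per-comment code records.
raw_response = """
[
  {"id": "ytc_UgyUhEA3HoH6z7Gn4A54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxC1acnJ5bjQMjrmi54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup(raw: str, comment_id: str):
    """Return the coded record for one comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw_response, "ytc_UgxC1acnJ5bjQMjrmi54AaABAg")
print(record["emotion"])  # mixed
```

A record retrieved this way carries exactly the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion), so rendering that table is a matter of formatting one dictionary.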