Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgzeOjrSA…: As a youtube editor and technically editing things to make "art" I don't even co…
- ytc_UgyQXM7gy…: So, all the other companies tried to pin down OpenAI when GPT4 was released, now…
- ytr_UgwolAoOx…: we really are not lol what we have isnt even true AI just a large language model…
- ytc_Ugx2uaLP7…: There will be only one winner in the long term. Copyright or AI. I do not know…
- ytc_UgxL1FQYT…: "When I came to you with those calculations, we thought we might start a chain r…
- ytr_Ugyko0Rh6…: I personally think bullshit, I suck at drawing so I resorted to ai, it makes me …
- ytc_Ugwc-hG2n…: There was an Acton Academy thatI looked into for my kids that had a similar conc…
- ytc_UgxlxHSJw…: I use ai to do stuff I don't want to waste my time doing, Summarise doc, respond…
Comment
We need to analyze whats going into these things, and determine the political biases because they can influence people...
Now, imagine these things are "Books". That's right... books do the same thing.
Are we now supposed to have Authors and Influencers get "licensed" before they can write a blog, article or book? The danger here is over-regulating AI. Use existing laws, and only create new laws where absolutely necessary. I do agree 100% with a law that requires any business to notify a person that they are interacting with an AI for each separate encounter in the very beginning of the interaction. I am NOT in favor of requiring licensing to use AI. We don't need or want that level of regulation. However, existing regulations should be used and modified where appropriate. For example, I continue to believe that AI should not be involved in critical applications where human lives are at stake without human supervision: such as AI controlled robotic surgeries, making medical decision without the supervision of a doctor, performing Law services without the supervision of an attending Attorney, piloting commercial airplanes that carry people without a human pilot to supervise and take over, etc... I don't think we need "Nutrition Labels" for AI... it is pointless and will not have any ultimate benefit, with the exception that a label might include where it meets certain government criteria for a critical application... such as Military standards, Medical Surgical Applications etc.
Instead we need to invoke copyright protections and IP protections to include limiting training data. For example, that training can only occur (without additional permission) on public open source non-copyrighted material. And to protect artists, all art published in the last 25 to 50 years should be considered as copyright to the artist / owner unless posted under an open source / creative commons type license. This will protect our artists from having their "Styles" basically stolen from the internet. AI teaching students should require an teacher of appropriate educational skill and credentials to supervise. Things of this nature are critical. However, We need to avoid requiring every AI to be analyzed and reviewed when not used in areas that are not considered critical or of significant risk to humans. For example, there is no need to have an AI that is license to "Tell a story" or "to provide general advice" when the EULA specifically notifies the recipient that the AI is not trained as a professional and should not be relied upon for professional advice. It makes no sense to have to list EVERY possible variation of that.
youtube · AI Governance · 2023-05-16T23:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz01qSYzjg8GyyRnoN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzHDvZhbtDqDqGE9994AaABAg","responsibility":"government","reasoning":"deontological","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Ugw664Rx60xutHn03-t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwD1QGjWnIAFRqFiw54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwwyX5L32BH1ZXN7ip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxUaiDjy_VUJsJUHBJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw1lGgAunYPeXO364Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxY9fvqdLvrkhQ8vf54AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxB2FRCaKtlamWRQ714AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzkWGfIsmuU79ZTuN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
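A batch response like this can be parsed and sanity-checked mechanically before the labels are stored. The Python sketch below is one way to do it, not the pipeline's actual code; it inlines the first two rows of the response above, and it assumes the label sets visible in this batch are the allowed values (the real codebook may define more categories).

```python
import json

# First two rows of the raw model output, copied verbatim from the
# response above (the full array has ten entries).
RAW_RESPONSE = """[
  {"id":"ytc_Ugz01qSYzjg8GyyRnoN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzHDvZhbtDqDqGE9994AaABAg","responsibility":"government","reasoning":"deontological","policy":"industry_self","emotion":"fear"}
]"""

# Allowed labels per dimension, inferred only from the values seen in
# this batch -- an assumption, not the full coding scheme.
DIMENSIONS = {
    "responsibility": {"none", "government", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "industry_self", "regulate", "liability"},
    "emotion": {"resignation", "fear", "outrage", "approval",
                "indifference", "mixed"},
}


def parse_codings(raw):
    """Parse the JSON array, validate labels, and index rows by comment ID."""
    codings = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        codings[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return codings


codings = parse_codings(RAW_RESPONSE)
# Matches the Coding Result table above for the featured comment.
print(codings["ytc_UgzHDvZhbtDqDqGE9994AaABAg"]["policy"])  # industry_self
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook, which would otherwise corrupt downstream counts silently.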