Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think, first of all, a categorization of AI companies is required. First are the ones who do research and come up with new, powerful general-purpose models and want to deploy them to the world. Second are the ones who depend on the first to create use-case-specific software. Third are the companies who just buy the end product and deploy it for use.
IMO, the first group should be regulated through an agency and licensing process if what they put out is usable beyond just a research paper, is fairly general purpose, and is beyond a certain threshold of capabilities. The second group can follow use-case-based general rules, so that someone trying to build software on top of the powerful models shouldn't need to go through government approvals. For them, high-risk use cases like employment and healthcare can have higher requirements to satisfy, medium-risk ones lower requirements, and low-risk ones should be virtually regulation free. In the third case, a company buying an end product should not be stifled with regulations at all, because that would slow down the adoption of the technology for good. They should be able to buy a product once it is compliant with the regulations of the risk category it belongs to and runs on an agency-approved model.
Still, the following things are unclear to us:
1. What to do with very powerful yet use-case-specific models that independent researchers create? Should research and publications be regulated? I am leaning towards no.
2. What to do with open-source models? On one hand, open-source models are the best way to assure transparency; on the other hand, they give a lot of power to everybody. Should the creation of open-source tools be restricted, or should the offering be regulated? What if a SaaS company uses such a model in the background; which rules would apply to them?
Source: youtube · AI Governance · 2023-05-20T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzZm19vwQkQl6KV7xd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyxKdWzkSOpeCVgKj54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAqod00ZRxj9dEJoF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxPEUo1WRd7GVEJD_t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxnkg9Pu9pk0dBSzGB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwRcfh48Mk3g7ovLcJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGETzKrBCwBgtdicd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxOXA6s98zdf6zLu1t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxf_TXzr9t3rRcfSnh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWjX1Oo-kwDB2eHup4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
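Each entry in the raw response carries the comment ID alongside the four coded dimensions, so matching a coding back to its comment is a simple lookup. A minimal sketch of that lookup, assuming the raw response is a JSON array in the format shown above (the `raw_response` string and `index_by_comment_id` helper are illustrative, not part of the pipeline):

```python
import json

# Hypothetical raw LLM batch response, mirroring the format above:
# a JSON array of objects keyed by comment ID, one per coded comment.
raw_response = """
[
  {"id": "ytc_UgyGETzKrBCwBgtdicd4AaABAg",
   "responsibility": "company",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index each coding by its comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_by_comment_id(raw_response)

# Look up the coding for the comment shown in the table above.
coding = codings["ytc_UgyGETzKrBCwBgtdicd4AaABAg"]
print(coding["policy"])   # → regulate
print(coding["emotion"])  # → indifference
```

Indexing by ID rather than array position keeps the lookup robust when the model returns entries in a different order than the comments were submitted.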