Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Live Ai coaching on games would be an interesting thing. But they have still sev… (ytc_UgwphEZVe…)
- And yet. None of these would have been made if it weren't for the AI piece. So m… (ytc_Ugy_sBUfg…)
- If AI is to become outsized and unmanageable, it will have run every scenario of… (ytc_UgzlI7xQs…)
- Don't worry, let's hope that soon people will realaise the danger of AI to the w… (ytc_UgyVyYtfz…)
- Elon Musk's Stance on Artificial Intelligence: Is he Right? Artificial Intellig… (ytc_UgwcIlD7T…)
- As an instrument/automtion engineer whole my life,..it is all absurd around AI. … (ytc_Ugw8SkRmU…)
- The irony of this guest talking about echo chambers and then not recognising the… (ytc_UgwZJTjdH…)
- Balancing AI in education is tricky, but Olovka's been helpful by supporting my … (ytc_Ugx4z_dC1…)
Comment
The AI is not the problem itself, the creators of the models and those who feed them are, if you have malicious and/or incompetent people creating and training AI models without any constraints and selection principles regarding the incorporated feed that are strictly set by a law for each and every category of purpose of various AI models that will ensure the safety and security of individuals and entire populations of societies in the broadest sense, you will have severe unwanted consequences, the creation of the type of envisaged possibilities, their set-up, interconnectivity, mutual flow, influence and resulting output of the models, and especially the type of their building and training by specific types of feed needs to be always monitored (by specially designated governmental authorities and/or by special departments of corporations, universities and institutes) and regulated by a specific, concrete and precise law or laws for different areas, if needed, this especially applies to AI models of large corporations that are available for usage to the general public. Complex larger AI models with different categories of capabilities and purposes can be also built, but every level and every entrance of the construction structure set-up needs to be strictly detached from other parts without unwanted or any interflow and sourcing-out ever, depending on the integrated type and scale of the model, because the risk of harmful damaging outcomes is even higher in areas such as defense and robotics for example, particularly because of faults of the system or some external malicious entering, building of such models is not recommended, complex multi-layered models should only be built in broad specific specialized experimental research areas, for example: in Medicine (Ethiology - Clinical symptoms of the illness or clinical symptoms of the syndrome - Diagnosis - Therapy), Microbiology, Pharmacology and Genetics. 
Inadequate scientific and business AI models that are custom made and internally used by universities, institutes and different corporations can cause even greater long-lasting harm and damage. The large scale Era of AI has just begun, so everything is not seen or done, yet, but it is known, that means that lawmakers should follow and observe the creation, enlargement and implementation of AI and draft provisions of a Bill or bills that will be introduced when a certain level of a fast developmental growth and presence is successfully achieved. AI will harm and damage as much as people are reckless, leaving AI unregulated for too long also belongs to the category of recklessness.
Source: youtube · Topic: AI Governance · Posted: 2025-12-29T18:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxgLSDL_7rt1JYnMjJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy8hN4RsQKm9csOny54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxMCzh42J1unAjOaop4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwGqKqVAES6DII-qdN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzbHr2Ih_3LFPKQ9mt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyDLawfSUz6D0QU-XJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgydXVHMwJRNuYMs49p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxeVjx5qzrp2fiR_c94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaaXCxIklOuQNFwNV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyxXMn5RkON6Gw1WiV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
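The raw response is a JSON array with one object per coded comment, keyed by `id`, with the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for per-comment lookup — the function name `index_codes` is hypothetical, and the allowed values per dimension are inferred only from the codes that appear in this document (the real codebook may define more):

```python
import json

# Two example rows copied from the batch response above; a real run
# would read the model's full output string instead.
RAW_RESPONSE = """
[
  {"id": "ytc_UgxMCzh42J1unAjOaop4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxgLSDL_7rt1JYnMjJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
"""

# Allowed values per dimension, inferred from the codes visible in this
# document -- an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def index_codes(raw: str) -> dict:
    """Parse a batch response and return {comment_id: codes}, dropping
    rows that lack an ID or whose values fall outside the schema."""
    indexed = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # a row without an ID cannot be looked up
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            indexed[cid] = {dim: row[dim] for dim in SCHEMA}
    return indexed

codes = index_codes(RAW_RESPONSE)
# Lookup by comment ID, as in the "Coding Result" table above:
print(codes["ytc_UgxMCzh42J1unAjOaop4AaABAg"]["policy"])  # regulate
```

Validating each row against a fixed value set before indexing is one way to catch the common failure mode of structured LLM output: a syntactically valid JSON row containing a label the codebook never defined.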