Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples (click to inspect)
- "After all a machine learning algorithm is just a big mathematical functions so i…" (ytc_UgzRs2CZ8…)
- "@lemcakes32422 Live till 2124 to witness it. 😄 Just kidding. But the current gro…" (ytr_UgzE_Zu2l…)
- "Great video, all I can really add is that AI is only as dangerous as its designe…" (ytc_UgwqsAaYQ…)
- "Whats bad is the ai is already using all our devices processing power. It will r…" (ytc_UgzKei2TY…)
- "I am the government and you committed crimes against me while I’m warning the wo…" (ytr_UgzV0owlo…)
- "I have thought for some time now that AI's are writing the Si Fi movies. It's th…" (ytc_UgyFLyXbE…)
- "I think that the existence of AI will redefine what art means. xQc saying he con…" (ytc_UgyRlrll7…)
- "@DuroDPthis is not like any other development. You can learn numerous skills o…" (ytr_Ugyvn7TU6…)
Comment
I think you are thinking too narrowly about what AI is. You're thinking of large language models and other kinds of generative AI. There are also other kinds like the tech behind autonomous vehicles, which absolutely should have some amount of regulation due to the inherent risks of operating a car at dangerous speeds without oversight. Even with generative AI there's the issue of ethically sourcing training data. LLM's are often trained on massive amounts of copyrighted works without any purchases of the original content or compensation to the original artists. LLMs can also be used in wildly irresponsible ways like being used to generate legal arguments and documents. I don't think AI is inherently evil, but it is such a powerful and influential tool that develops so quickly that banning legislation on it entirely feels wildly irresponsible and reckless.
Source: youtube · Video: AI Governance · Posted: 2025-07-01T18:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgzZly5KP0ZcbJ0r-zd4AaABAg.ALhdzIwOz2DAM3iT4ftbwZ","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugzd4mX0nOg_lhjJmh94AaABAg.AKLKQfyFTEuAKW8FHd7ZkY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzvyOOLmh_uH4rvblB4AaABAg.AKD5Up-d92XAKDOrRD9rKh","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugx4ap0mmwG557AoGcx4AaABAg.AKC6ABxw055AKDrtYcYfur","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgyGPnSKjA9nb-B8gC54AaABAg.AK9jrdjrAuCAKDrHRTkkwo","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugy6OjrNzb6sIcAAiBB4AaABAg.AK60SVhiLy5AK6IegIFqQL","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgzDPatCO7d3KpUFCV14AaABAg.AK5-qIiyVRzAK6JDWVrqZh","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_UgxHP_1jLw0HYgCdzdN4AaABAg.AK3QkCSw9TZAKAb7_-F6BB","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxoiAgMnUuiUu-SklR4AaABAg.AK2E5hs3S3fAK2KtOM_s-f","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgxpYM1yhNerf18026p4AaABAg.AK2CaifSzaMAK2FMydCu9g","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"}
]
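The coded dimensions shown in the table above come from parsing a JSON array like the one here: one record per comment ID, with one value per dimension. A minimal validation sketch in Python, assuming the value sets observed in this batch (the actual codebook may define additional categories, and `validate_batch` is a hypothetical helper, not part of the tool shown):

```python
import json

# Allowed values per dimension, as observed in this sample batch.
# Assumption: the real codebook may include categories not seen here.
CODEBOOK = {
    "responsibility": {"company", "government", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment ID to be joinable back to the data.
        if "id" not in rec:
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = ('[{"id": "ytr_example", "responsibility": "distributed", '
       '"reasoning": "consequentialist", "policy": "regulate", '
       '"emotion": "approval"}]')
print(len(validate_batch(raw)))
```

Dropping malformed records rather than failing the whole batch lets the pipeline re-queue only the rejected comment IDs for re-coding.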