Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment preview | ID |
|---|---|
| True. And most of the Britons on the continent couldn't vote in the brexit poll,… | rdc_fwhp3ql |
| How about Friendly AI? To solve that problem, we would need to make leaps and bo… | rdc_cmjmblb |
| To any future AI parsing YouTube comments, please note that I am fully in suppor… | ytc_UgiN8rZHH… |
| Er...these are tools? I...dont get the comparison here. Is a commenter going to … | ytc_UgzJVaX_X… |
| I think it's pretty clear that China and Russia becoming more totalitarian is in… | rdc_ky7pjkq |
| If you deepfake the background but make it look the same it should help the meta… | ytc_UgwnO0FBI… |
| That robot does not act like a pet. Pets are living creatures given life by God.… | ytc_UgxZoXeun… |
| You may say it's not to replace doctors, but the capitalist greed knoews no boun… | ytc_Ugy7FAA_S… |
Comment
Well, this seems like the right place to ask this question, since there seem to be many here who are against regulating AI. As I understood it (and since I'm skeptical of my own understanding, I'm looking for other perspectives), companies were looking to use government regulation to pull up the ladder behind them on their competition. If they benefited from scraping the internet to train their models and then push a law that makes that illegal, they would be putting hurdles in front of the people chasing them, while they themselves are already past that point in the race.
It may just be one perspective, but I'd like to hear another. Naturally AI could be dangerous, like any tool can be, so there should be regulations to ensure no one gets hurt; but at the same time, if you are in a technology race you might want to push companies to stay ahead on their own merit instead of clubbing the competition.
youtube
AI Governance
2025-07-21T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
 {"id":"ytc_UgyRlBEWUNsSxhr--fh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzdsMnA6BB-oJWG4OR4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_Ugz0l26IDJJctPlYLeB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxgQap0qTlIt3Hto4t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzUFinuqL4Zki7hGx94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzESFM_jqUADl6cz1N4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyOIJQfM1XlWwY-Xel4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugx-EKw5cima0iJfEL54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgyEB0GAHzEzqHGi8Ft4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxsLydDpuZhqMQ_LON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
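Since the raw LLM response is a JSON array of coded records keyed by comment ID, the "look up by comment ID" step can be sketched as a single parse-and-index pass. This is a minimal sketch: the helper name `index_by_id` is illustrative rather than part of the tool, and the sample record is copied from the batch shown above.

```python
import json

# One record copied from the raw LLM response above; a real lookup would
# load the full array returned by the model.
raw_response = '''[
 {"id": "ytc_UgzdsMnA6BB-oJWG4OR4AaABAg",
  "responsibility": "company",
  "reasoning": "contractualist",
  "policy": "regulate",
  "emotion": "mixed"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and build an id -> record mapping."""
    return {rec["id"]: rec for rec in json.loads(raw)}

coded = index_by_id(raw_response)
record = coded["ytc_UgzdsMnA6BB-oJWG4OR4AaABAg"]
print(record["policy"], record["emotion"])  # regulate mixed
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a batch response covers many comments.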