Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
sora ai It's a bot that steals other people's videos and makes the video more re…
ytc_UgxXC4d5v…
@pyerack I agree with your statement and I think this is a people having too hig…
ytr_UgzZtm3rO…
Are they taking into account the cost of lawsuits and having to comply with what…
rdc_n4egkbd
I'm currently pursuing a degree in data science. Professionals who poise themsel…
ytc_Ugwm6C5mh…
We all need to go back to school to upgrade our skills to become more capable to…
ytc_UgwI6ehWy…
Unfortunately, they don’t care and will continue to roll out their plans to have…
ytc_UgwS-mw3P…
Just so y’all know it’s not the ai’s fault it’s just the programmers
Wait this …
ytc_UgyQagDnH…
I think it's scarier to be young than old at this time of the A.I.…
ytc_Ugz4voCu4…
Comment
Can I put this out there in a really stupid way. Granted materials, costs, etc are impossible for most but in this case the same is true with ai. A person at least in America can’t just use his billions and build himself a nuclear bomb or power plant. So why would this not be the same? Just make it illegal to do? Granted other countries will do what they want. But if the world agreed as they did with stopping the creation of nuclear weapons (minus Iran) then seeing the harm in this or the forewarnings, why can’t we do the same, agreed upon countries stopping this? You’ll still need to data centers, the power, the chips, hardware etc. you can’t make it if you don’t allow mass production of x components. If you stop the sale of bullets a gun is useless. Granted as humans we seem to be reactionary vs proactive, but in this case if enough scientists, scholars, insiders warn the world of the potential dangers, wouldn’t you think they’d stop. Problem is it took the tsar bomb before the world realized we were headed for complete human annihilation. I think with this, there’s no immediate physical harm and when there is it’ll be too late. Easy to see the tsar and think damn this is gonna fuck us up if we don’t stop. Very different than looking at tech or tech companies and thinking we’re all gonna go extinct. Last point, are we over reacting. One thing they thought at a point was electricity was unpredictable and its use would kill all humanity. Apocalyptic end of humanity? Well that didn’t happen? Or maybe it is? Just took the creation of electricity years to get to this point now ? AI couldn’t work without electricity. Ai is the electricity of the 19th and 20th century just its advancement can happen at a greater and more exponential scale. I’m lost in this rant now. But I love these interviews and I’m certain this really needs to come to an end but fear it only will with human extinction
youtube
AI Governance
2025-10-18T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
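Each coded comment takes one value per dimension from a small closed set. A minimal validation sketch, assuming the allowed values are exactly those observed on this page (the real codebook may define additional values):

```python
# Allowed values per dimension, inferred from the codes visible on this page;
# the actual codebook may include more values (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage"},
}

def validate(code: dict) -> list[str]:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, ok in ALLOWED.items() if code.get(dim) not in ok]

# The coding result shown in the table above:
sample = {"responsibility": "government", "reasoning": "contractualist",
          "policy": "regulate", "emotion": "fear"}
print(validate(sample))  # → []
```

A non-empty return value flags a row where the model drifted outside the coding scheme, which is worth catching before aggregation.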
Raw LLM Response
[
{"id":"ytc_UgwKcAoGbu4sOZ-dlvJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOAZGJegqseuddevF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoJEJ1q1ZimarmuHt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzmautTrToqYsjwtx14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyUV_RR_WO8pr0Em7d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzw2m3DurSZl_Dx63B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwcqsZDViOLwNkAZwl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznffdO8-EONKw2N8l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz8FdO2DPiXWVfzkvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyT5zRkCrVu-av6djF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]