Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "The MAN WHO SPOKE THE MOST AGAINST AI ROBOTS IS MAKING THEM......! ELON MUSK IS …" (ytc_UgwyEhBlG…)
- "Heres how to make AI safe,follow God and dont use it. Its easy,just like one plu…" (ytc_UgyKESMrM…)
- "@angxls_real so, why not return to hand made physical 2d? And remove all the dig…" (ytr_UgyHzzw_U…)
- "Ethics is a perspective, how do you teach AI to be ethical and from what perspec…" (ytc_UgyG1pLmY…)
- "at what point will this video as a hallucinatory misrepresentation have more vie…" (ytc_UgzDZJy_o…)
- "They’re going to happen somewhere. There’s many in public offices who won’t pass…" (ytc_Ugz_fZaCY…)
- "You better learn how to grow your own food, how to can/preserve it, hunt, fish, …" (ytc_UgzHTTFxO…)
- "Ai is already doing it better than a large percentage of artists. That's a fact.…" (ytc_UgylfJfmd…)
Comment
history dictates what happens with over-regulation. AI is definitely being developed in the nether regions of the computing world. Cartels have far more money than most governments are willing to throw at regulation, the drug problem is an example. Illegal arms, human trafficking, smuggling, all results of panic-ridden over-regulation. here's the framework to work with : "The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm itself." *edited spelling errors*
youtube · AI Governance · 2023-06-23T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```
[{"id":"ytc_UgxjYhyDzvZghqJkSiF4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwV0oFufG0nHSmGQ_V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyaHjECaeljMs2vTXV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwa5qP_Y0tiKdhcsol4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgwqnJ-N1EDM9M8LTg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyvIogCaUQubtfd7-Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0-yOu1NH_HFf1p9R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPG_4gRm45Gb1LLY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyZ7Pdmv-OL9ffRiZh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzH3lROqlWuCOVvguB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"})
```
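The look-up-by-comment-ID workflow described at the top of this page requires parsing a raw batch response like the one above into a dict keyed by `id`. Note that this particular response closes the array with `)` instead of `]`, which breaks strict JSON parsing and would leave every dimension `unclear`. Below is a minimal Python sketch; the function name and the lenient close-delimiter repair are illustrative assumptions, not part of any documented pipeline:

```python
import json


def parse_batch_response(raw: str) -> dict:
    """Parse a batch coding response into {comment_id: record}.

    Tolerates one malformation seen in practice: the model closing
    the JSON array with ')' instead of ']'. This is a narrow repair
    for that single case, not a general JSON fixer.
    """
    raw = raw.strip()
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        if raw.endswith(")"):
            # Replace the stray ')' with the intended ']' and retry.
            records = json.loads(raw[:-1] + "]")
        else:
            raise
    # Index by comment id for O(1) look-up.
    return {rec["id"]: rec for rec in records}


# Hypothetical shortened example, same shape as the response above:
raw = '[{"id":"ytc_example","responsibility":"government",' \
      '"reasoning":"mixed","policy":"regulate","emotion":"mixed"})'
coded = parse_batch_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

With a repair like this in place, a response that merely uses the wrong closing delimiter can still be coded instead of falling through to `unclear` on every dimension.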