Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgydbbllI…: "It's such a dumb quote. It's less they have a policy, and more they literally ha…"
- ytc_Ugz28Z-2V…: "All of this is b.......it. People get fired because of ai. Companies are furious…"
- ytc_UgxK2WvZu…: "Seeing this video after seeing all the people act like A.I.'s on Tik Tok... Yea,…"
- ytr_Ugyyv0C1h…: "You're really comparing the 20 year old tech to what's available now? There's al…"
- ytc_Ugwrm2I41…: "Yep, an AI with complete control of a nuclear device... what could possibly go…"
- ytc_UgyGdWoon…: "As an artist myself, I don’t really hate AI art that much. I see it as an asset …"
- ytr_UgiOaXexr…: "If an AI gets to an point where you can call it sentient, meta ethics may also b…"
- ytr_Ugw_H8yjW…: "he needs to make content continuosly, the truth is whats happening in the labs t…"
Comment
This guy is dangerous because he has a drive to achieving AGI, which in itself poses an immediate existential risk. Let that word sink in: existential... Why we would let him and his company experiment with something existential, is beyond belief to me. Profits perhaps? Profits mean nothing in a singularity. We need an immediate stop to AI development until we have proper rules in place.
youtube · AI Governance · 2024-03-05T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
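The coding result above maps one comment to four dimensions plus a timestamp. As a minimal sketch of how such a record could be represented and checked, the snippet below defines a small Python dataclass; the field names and the candidate value sets are inferred from this table and the raw response below, not from a confirmed schema.

```python
# Sketch of a record type for one coded comment. The value sets are
# assumptions inferred from the sample outputs on this page.
from dataclasses import dataclass

RESPONSIBILITY = {"company", "developer", "government", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def is_valid(self) -> bool:
        """Check each dimension against the value sets seen in the sample output."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```

A validation step like `is_valid()` is one way to catch model outputs that drift outside the coding scheme before they reach the dashboard.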
Raw LLM Response
```json
[
{"id":"ytc_Ugztkt5V-WBOYuAWL1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFpHY15Y8h5O9z7UN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw3Q6ZlLWn0yanZz0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxtHUE45dy550ElUfJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyfxtDKf7e7xwOK-ed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyA-QooXEOMR6jB5h14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyzy0alJS2pSDPLcvp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQLT5VteWD0vYrEJF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx-lAgdDQbGCsTrSpN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxpQ8HlY08Tn2evV2J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
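Because the raw response is a JSON array keyed by comment ID, looking up any coded comment reduces to parsing the array and indexing it by `id`. Below is a hedged sketch of that lookup; the function name and the trimmed example payload are illustrative, not the viewer's actual implementation.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments, as above)
    and index the entries by comment ID for direct lookup."""
    entries = json.loads(raw)
    return {entry["id"]: entry for entry in entries}

# Hypothetical usage with a trimmed copy of one entry from the response above.
raw_response = """
[
  {"id": "ytc_UgyA-QooXEOMR6jB5h14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""
by_id = index_raw_response(raw_response)
print(by_id["ytc_UgyA-QooXEOMR6jB5h14AaABAg"]["policy"])  # -> "regulate"
```

The same index can back both the ID lookup box and the random-sample list: a sample click simply resolves its ID against the parsed batch.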