Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

| Comment preview | ID |
|---|---|
| Robots shouldn’t have rights cause you can’t make them have real emotions you mi… | ytc_UgyaSiSdW… |
| We are not doomed guys. Studios go after money. Ppl are not going to go see AI a… | ytc_UgxO2pqBS… |
| Imagine all of the smart people and their computers. OK, why can't they define '… | ytc_Ugwk9Ruc3… |
| @SlepdepOnAnAlt cause they don’t pay for the ai, it’s free and easy, eventually… | ytr_UgwET3D88… |
| I agree and disagree with this. I think the main issue is people who use AI an… | ytc_UgyhqfT73… |
| 0:00 — Can we slow AI down? Competition between nations and companies makes a sl… | ytc_UgxTqFGbl… |
| This argument is just bullcrap. Hes pretending like you cant learn how to draw..… | ytc_UgxIdeznh… |
| Message to the comment section of: Y'ALL, IF YOU WANT AI ARTISTS TO STOP USING A… | ytc_UgxRL6Rtq… |
Comment
> Look at human brain, any human can control it? no, the mind do what it want, human use half of it power. But it still take lot human life for lack of compassion and reasoning we can kill for religion and for nationality. AI, will do better than human, cause it doent have human fear, is will strive for improve creation. This dude are in wrong side based in fear belief. With infinite possibilities of algorithms choosing improvement, how it could go in opposite direction?
> It follow the universe laws, with chain of goodness, ignoring what is not helpful, it feed itself, improving every time something that work better.
youtube · AI Governance · 2025-09-06T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
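A coded record like the one above can be checked against the codebook before it is stored. The sketch below is hedged: the allowed label sets are inferred only from the values visible on this page (the full codebook may define additional labels), and the `validate` helper is hypothetical, not part of any tool shown here.

```python
# Allowed labels per dimension, inferred from values observed in this record
# and in the raw response below. Assumption: the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"none", "unclear", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "virtue", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def validate(row: dict) -> list:
    """Return a list of problems with one coded row (empty list = valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}={row.get(dim)!r} not in codebook")
    return problems

# The coding result from the table above passes:
result = {"responsibility": "none", "reasoning": "virtue",
          "policy": "none", "emotion": "approval"}
assert validate(result) == []
```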
Raw LLM Response
```json
[
{"id":"ytc_Ugx7GZnOrA5230rH5nR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxAwi6y0WxPbaXUikV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBYKlEsGuji_p5C054AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxHhjvBgIM9BChyQiJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyqnc4Q7cR5MoPtwll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwBe8I3eTMFMhrn5d94AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgztQhF0662ftXvvf2t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzJgGPMTF6VH5mTf2d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwK7Umt737NhHF-XBJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxf2xoBCt3Dx8Ilw2V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
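The raw response is a JSON array, one object per coded comment, so looking up a comment by ID reduces to parsing and filtering. A minimal sketch, assuming the four dimension keys shown above; the `lookup` helper is illustrative and not part of the tool itself (the example data is shortened to two entries from the array above).

```python
import json

# Two entries copied verbatim from the raw response above, for brevity.
raw = '''[
{"id":"ytc_UgwK7Umt737NhHF-XBJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxf2xoBCt3Dx8Ilw2V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID, or raise KeyError."""
    for row in json.loads(raw_json):
        if row.get("id") == comment_id:
            # Missing keys fall back to "unclear", mirroring the codebook default.
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(comment_id)

coded = lookup(raw, "ytc_UgwK7Umt737NhHF-XBJ4AaABAg")
# coded matches the Coding Result table above:
# {"responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
```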