Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click one to inspect):

- ytc_Ugwhh4WWz…: "Yeah, right, just shut off the server. Sure. These people will keep on until the…"
- ytr_UgzmFsD5e…: "Using AI as a reference is fine, it's when you try to sell it or claim it as you…"
- ytc_UgwWJHvKI…: "Ok, but seriously. Why are they mad that people are still doing art? They know t…"
- ytc_UgzjpgAaB…: "I’ve been utilizing AICarma to analyze AI mentions, and it highlights areas wher…"
- ytc_UgyZPBybJ…: "or you could go in the opposite direction robot on robot action. no human casual…"
- ytc_Ugxsr-B_q…: "1. AI replaces human labor → reduces production cost → increases corporate profi…"
- ytr_UgwrqI8_3…: "Add to this how generative AI has no proven material returns. Companies that hav…"
- ytc_UgyfyEk62…: "Honestly, a dominant USA AI would be more trustworthy than China AI? Sounds li…"
Comment
GTFOH this dude is a talking contradiction... he says "I believe we are living in a simulation" but at the same time he says AI is gonna kill us, so why should anyone worry about AI if the whole world is fake and already programmed to go in a specific way? lol... everything he says sounds like religious fanatic apocalypse "waiting for the rapture" bullcrap... saying "everyone wants to live forever" ummm, no actually, there are plenty of suicidal people, remember... guys like this have too much time and money but probably do nothing to help people in the world suffering rn irl tho smh....
youtube · AI Governance · 2025-09-05T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzH1LvdStjYPmNjlXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy5iGAYBIzO3VLzNGN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwR_l218dpXwhx2_AV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyW041_OYqjpaXFmTp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyqf5galmTr_CW4VGN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgycvrSkSWVCD84rUKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgySSgqeoxBMQvS_6ld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz23DiqcTmV6EatynN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxL4yxpnh3_NSVgWp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIeL8Er7SdjoOxfzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
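The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of parsing and sanity-checking such a batch might look like the following; the allowed value sets are only inferred from the responses shown on this page, and the real codebook may define additional categories:

```python
import json

# Allowed values per coding dimension, inferred from the sample batch above
# (assumption: the actual codebook may include categories not seen here).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
    "emotion": {"resignation", "outrage", "indifference", "fear",
                "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) and
    index the codings by comment ID, rejecting values outside ALLOWED."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim}={row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID makes the per-comment lookup above trivial: `parse_batch(raw)["ytc_…"]["emotion"]` returns the coded emotion for that comment, and any value the LLM invents outside the codebook fails loudly instead of silently entering the dataset.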