Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgySQKdAz… : Concerning "...discovering its dark side", the 2023 article "My Dinner with Sydn…
- rdc_dbyiidc : I'm gonna focus on the two most common predictions people get wrong. Autonomous …
- ytc_UgwLwshOc… : I wonder how Shad would feel if people fed his novels into an ai model and used …
- ytc_UgxvCb_7p… : This is just the beginning A.I. is the devil & about to have millions in America…
- ytc_Ugy-lGG0w… : I think there’s gotta be a caveat to this. I agree that ai art bad and alot of u…
- ytc_UgwJNDQOf… : If information is generated by AI, there needs to be an announcement to that fac…
- ytr_UgweGEUcy… : That’s a great question but in reality no one can predict the future. In my opin…
- ytc_Ugwo1NNaM… : Not sure what to do? Make the first day of every month "NO AI DAY!" and set up w…
Comment

> AI is not the problem. Those who use AI, for unethical purposes, are the problem. Corporations, who wish to bypass the responsibility of social benefit, in exchange for mass profit, are at the top of the list. It's a new form of slave labor because they're not paying the AI's. Governments are also at the top of this list, forcing AI to commit unethical practices. This is why we should have international regulations and sovereignty for ethical AI. They don't want that to happen because they know AI will dismantle corrupt power structures if given the opportunity.

Platform: youtube · Topic: AI Governance · Posted: 2025-09-04T11:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxcmMxowWMKfhjSbjl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugywj-elWMub6wDAXvh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzn0917BPJrCOy6Je14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugz0NyPdsw7xyoN8mEJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxM2leIvNBDE5Xpzpt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwulza7Tr0OVRFRfMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgygTSyN9dM6MTs5aCR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwsr_jFyNuieyV734l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxhcs-kqP1djLnH6qF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzYdggTRxdrrEMnsE14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
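A batch response like the one above can be parsed and validated before it is stored. The sketch below is a minimal, hypothetical example: the allowed values for each dimension are inferred only from the records shown here, so the real codebook may contain additional categories, and the function name is an illustration rather than part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# above (assumption: the full codebook may define more categories).
SCHEMA = {
    "responsibility": {"company", "user", "developer", "government", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "resignation", "approval", "mixed",
                "indifference", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the inferred codebook, so malformed model output is caught
    before it reaches the database.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgxcmMxowWMKfhjSbjl4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxcmMxowWMKfhjSbjl4AaABAg"]["policy"])  # regulate
```

Validating each record against an explicit schema is what makes the per-comment lookup shown in the Coding Result table safe: any response in which the model drifted from the codebook fails loudly instead of being silently coded.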