Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Not a lawyer but I will repeat an opinion from one I know that really put into p…
ytc_Ugz4NqBrl…
I find it quite pathetic and sad that Google is so behind OpenAI. Imagine how mu…
ytc_UgwShQaQV…
“Train how to use AI” is literally so stupid, now I think of it. What do you mea…
ytc_UgxHS_-MC…
I don’t get how people don’t understand this. You’re exactly right. It’s a tool …
rdc_mp2udsl
Very simple fact. You put your brain in the hands of AI. AI will disrupt all you…
ytc_UgwPTFbMT…
0:17 - WAIT ! ✋🏻 HALT ! ! !
You Cannot Qualify Something that is not a …
ytc_Ugw9PGDCF…
I use AI for stories and to make videos for fun. Not to make videos or Art and p…
ytc_Ugyt-hqLk…
i tried to get the emerson ai to agree not to kill humans. i could not. it sai…
ytc_Ugy0Q1cbF…
Comment
Lets assume most white collar loses their job to AI, and all the AI titans have the best AI, how will the people pay for AI subscriptions with no money in pocket.
Either the high value transactions will happen in the top 5% thats excludes white collar displaced people and or govt debt on behalf of people.
Thoughts,,?
youtube
AI Governance
2025-10-06T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyQp5G5HbRhrz4owJJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9Y7zIhTJWG2fX3zx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzGkrjCBgkTEipjKlN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwc7JOeLUrxjvlgbjp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0CpnH5PUDauvTjYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxyrGlHMpeg5UC_7Sl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwqHZ3sVpsgG8AjwF14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwZQ6DjQ7msGRnllTZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFm3GvfTB3mqs5oM54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRPwD4E9QV_O9mMdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
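The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing it and indexing the records by comment ID for lookup, as the panel above does (the field names come from the response itself; the two sample records are copied from it, and any use of other category values would be an assumption):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgzGkrjCBgkTEipjKlN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwZQ6DjQ7msGRnllTZ4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

# Parse the model output and index the coded dimensions by comment ID,
# so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw_response)}

record = codes["ytc_UgzGkrjCBgkTEipjKlN4AaABAg"]
print(record["responsibility"], record["emotion"])  # company fear
```

Indexing by `id` mirrors the "look up by comment ID" behaviour of the page: one parse of the response, then constant-time retrieval of any record.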