Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Waymos want to make money easely (without driver)... so neighbors should sue wa…" (ytc_UgwyAve_V…)
- "@LordofdeLoquendo I know a lot of people from Fiverr use A.I to sell artwork tha…" (ytr_Ugz3nvbYF…)
- "yes, have u not seen the terminator. admin jobs will vanish more will be claimin…" (ytc_Ugw6P0gYE…)
- "Honestly, while this sounds like a diss, you would literally be working for your…" (ytr_UgwYAm3_F…)
- "I mean, I just wanna see an AI get out of the box, take over a self driving car,…" (ytc_Ugx1qLwBT…)
- "In the end if big companies automated everything and everybody lost their jobs, …" (ytc_Ugx_pr_w4…)
- "Whenever an AI image DOES fool me next time, I’ll just assume that the model got…" (ytc_UgzUHP0VA…)
- "One would think that if this predictive policing was a successful police model t…" (ytc_UgxaYKfVY…)
Comment
AI regulations can really only be (potentially) effective if passed by the UN via the AI Advisory Body. If just one world leader limits its own AI capabilities for the greater good, that will allow its competitors to take the reins and lead us to the greater not-so-good.
youtube
AI Governance
2024-04-04T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxBaHk_zS6K6gEX1it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwNbaPbt0BOymcT8zl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8CxU6cl0eLjq31iF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzSNPS6ioVd0S3o2o94AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzPhWlmKbwkuapO5Vx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuhxTp72yOF_GVPjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxBNuD8zHs53FzA3UJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxWWG54GJmWymlTysF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzijq8vSkVr7MC4X1x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw-WqjhVx_esUGdRSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
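The raw response is a JSON array of per-comment codes along the four dimensions shown in the coding-result table. A minimal Python sketch of how such a response could be parsed and validated; the allowed value sets here are hypothetical, inferred only from the values visible above (the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the codes shown above.
# Hypothetical: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    records with a missing id or out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
```

Keying the result by comment id supports the "look up by comment ID" workflow above: a coded record can be fetched directly once the response is parsed.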