Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Humans will need to invent, keep envisioning new AI projects for machines and so…" (ytc_UgyiogqCp…)
- "There is another reason why rich people invest in AI. I'll start with simple exa…" (ytc_UgyLURP-R…)
- "I’m an artist who doesn’t know a lot about AI technology. I post on tumblr mostl…" (ytc_UgzOkbWUc…)
- "Everyone’s just trying to live a happy life and here come corporations racing to…" (ytc_UgwzqKlpP…)
- "We don’t know if system one/system two is an accurate representation of the brai…" (ytc_Ugy9uFOZi…)
- "The reason there is a double standard is simple: People who cannot create with t…" (ytc_UgxSdWq_W…)
- "I'm so glad this topic was brought up because I've been thinking about this for …" (ytc_UggeerIRJ…)
- "I'm so prepped for AI rights. I watched Cloud Atlas and I was like machines are …" (ytc_UgiLNSy2w…)
Comment (youtube · AI Governance · 2023-11-02T20:2… · ♥ 1)

> I love the traditional right wing Tucker jab to the left at the end of this video. Elon is absolutely right on with regard to the dangers of AI to humanity. The dangers may well outweigh the good and with NO regulation the writing is on the wall. AI will most certainly be weaponized.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyyXjTzLPsiVvs1f-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwdrxT1yIN3xcrj4q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxd1vvjnuueT7e6dPt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzWWlzCnaE1ul-0zaZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxk8hyXb-KQj8RqVJl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxazyOYSiLsGQM2x9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRRuR2zqqe1_Om22x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz93K-qM5KUdXOFvtx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UgwHb1tUzIvVBygiQRd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySoa9MwZHrkHkuDaN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
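A batch response like the one above can be parsed and sanity-checked in a few lines. The following is a minimal sketch: the dimension names match the JSON shown here, but the allowed label sets are only inferred from the values visible in this dump (the real coding scheme may include more labels), and `parse_coding_response` is a hypothetical helper, not part of the tool.

```python
import json

# Label sets inferred from the values seen in this dump -- an
# assumption, not the tool's authoritative coding scheme.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "mixed", "approval", "outrage",
                "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Map comment ID -> coded dimensions, rejecting malformed rows."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid or not cid.startswith("ytc_"):
            raise ValueError(f"missing or malformed id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        # Keep only the known dimensions, dropping any extra keys.
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Validating every row before storing it makes it easy to spot coding runs where the model drifted off the label scheme, rather than silently recording an unexpected value.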