Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI is an existential threat to humanity because: 1. We don't know how it works a…" (ytc_UgxGSwLIv…)
- "We will call those sloppy ai pros ai generators...they're not artists, they gene…" (ytc_UgykjWcFw…)
- "Productivity increases with AI when it's used as a tool by people, but not when …" (ytc_UgyQPBu1F…)
- "If an AI is built to achieve Super Intelligence, won't it try to control Human B…" (ytc_UgxtIzUO8…)
- "In short, create an AI business and/or buy Stocks. I just saved you a half an ho…" (ytc_Ugw0KX2Bp…)
- "the philosophic question is who should we blame? the robot? the company?the comp…" (ytc_UgyZpVGAz…)
- "When ai strikes it'll strike when we'll have no chance of fighting back, cause i…" (ytc_UgxqUb8ca…)
- "florianschneider3982 it is, ai can crank out thousands of pieces a day while an …" (ytr_Ugy-81kVJ…)
Comment
Absolutely amazing video, i am currently doing my MFA, and my subject is about A.I. and the destruction it's capable of. After all, even after we are long gone, these robots and their A.I./cpu brains will still be here. I found this very helpful for my work. Keep up the great work, long time viewer.
youtube
AI Governance
2023-10-24T08:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxX60GAs9PcA7pukNl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwQU6AEuwl6fPFn11x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyUi73Zz6U25uEDL9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy6yHcJ72JyUdzLP5d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8QM0O8UDurrvHW9F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQ4mY5ZmqnOYZS1pV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwYfSUnJwisSAx6tVN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwLhJlgOT-zVNmOKQt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyKYTudoDNmPv-hBGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxF5Yi7TAYhd9OoE1l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}]
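The raw response above is a JSON array of per-comment codes. A minimal sketch of parsing such a response into a lookup table keyed by comment ID (the four dimension names come from the coding table above; the `parse_codes` helper and the two sample entries are illustrative, not part of the pipeline itself):

```python
import json

# Two illustrative entries in the same shape as the raw response above.
raw = '''[
 {"id": "ytc_UgxX60GAs9PcA7pukNl4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
 {"id": "ytc_UgwQU6AEuwl6fPFn11x4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_json: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into {comment_id: {dimension: value}},
    defaulting any dimension the model omitted to "unclear"."""
    codes = {}
    for item in json.loads(raw_json):
        codes[item["id"]] = {d: item.get(d, "unclear") for d in DIMENSIONS}
    return codes

codes = parse_codes(raw)
print(codes["ytc_UgwQU6AEuwl6fPFn11x4AaABAg"]["emotion"])  # → fear
```

Defaulting missing keys to "unclear" mirrors how the Coding Result table above falls back to "unclear" when the model gives no usable value; a stricter pipeline might instead reject such entries.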