Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I don't like automated machines that look like robots. Picture yourself as an im… (ytc_UgyJGIAg7…)
- dude, "im sorry" just means the AI understands that its statement caused a negat… (ytc_UgxJi6ndj…)
- I’m all for it. BUT I’d like to know who is held responsible when the 99% accura… (ytc_UgzzUFTCL…)
- Companies that cite AI are actually just outsourcing labor to cheaper countries.… (ytc_UgwwiLrUl…)
- I sort of disagree, becaue most of the no code builders out there, does 90% of w… (ytc_UgxT772yE…)
- I never worked at Goldman Sachs. I didn't go to Stanford or Harvard. By the laws… (ytc_UgxSJ-6p_…)
- It will actually become impossible. Say you prompt the AI to write in your style… (ytr_UgyE3Uevc…)
- Dude everybody dies on this AI CHANNEL 🙃 EITHER WE HAVE A SERIOUS DARK ARTIST ON… (ytc_UgyJJpxDE…)
Comment
A super intelligent machine could analyze human history and understand the complexities of good and evil. If designed with the right intentions, it might prioritize actions that foster good outcomes for humanity. However, if programmed with harmful objectives, it could pose a risk.
What kind of safeguards do you think should be in place to ensure AI acts in humanity's best interests?
youtube · AI Governance · 2025-12-24T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugy4T5UmRxtAqFNsuLh4AaABAg.AR4eVOubbApAR6yA87uKL1","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugwg8JX0OF3QY5D3TQl4AaABAg.AR4dVGMqOtNAR6yeo8pz6N","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwZAMsl34DUaMr9JnB4AaABAg.AR4T-cFn_l7AR6zKhRGsZ0","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxMiRa84AqKg1o9LH14AaABAg.AR4EzBUqcWxAR7-uY8wYqA","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzsVMLmwaMXcOJRqbJ4AaABAg.AR4CofgPducAR70WHKW6UI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzVRWFxrX53EdkF5KZ4AaABAg.AR4ADqaGHSHAR717tO8Nip","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgzoaI5hHkB34Cafyc14AaABAg.AR3xzM8Hmd0AR72Dg-09XW","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytr_Ugzb2AFfMfIczZsuA0l4AaABAg.AR3rxHiyXu5AR72j8nO2rp","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgyepHd5dAiNNR70POp4AaABAg.AR3nJcBxn8KAR73_-aCD4x","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzfpPRmuOnv31pHTNx4AaABAg.AR3Xf6P9gZrAR7B9rnMRM8","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
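The raw response above is a JSON array of per-comment coding records, one object per comment ID. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the example IDs and the lookup helper are hypothetical:

```python
import json

# A hypothetical raw LLM response in the same shape as the one shown above.
raw = """[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Parse the array of coding records.
records = json.loads(raw)

# Index records by comment ID so a single coding result can be
# fetched directly, as in the "Look up by comment ID" view.
by_id = {record["id"]: record for record in records}

coding = by_id["ytr_example1"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # approval
```

Indexing into a dict once up front keeps each subsequent lookup O(1), which matters if the same response is queried for many comment IDs.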