Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples
- "Humans don't need electricity, internet, AI, or money. We've lived for many tho…" (ytc_UgztXG4RC…)
- "Since childhood I had a lot of images in my head but never was able to draw them…" (ytc_UgyxL2bjR…)
- "Thanks for sharing your thoughts! Sophia brings up a great point about balancing…" (ytr_Ugyb_31KO…)
- "@dcisrael I'm not American and have no interest in American economics specifical…" (ytr_UgwEKci8x…)
- "The degradation of society is already noticeable. People I work with can no long…" (ytc_Ugw3xIhGA…)
- "Ironically, the fact that AI is stolen content doesn't seem to phase this person…" (ytr_Ugz8y6svf…)
- "When Christians find a loop hole... Create your own murderer so you can reach he…" (ytc_UgzXtMNnu…)
- "AI is a tool. Which means you can use it to solve the most pressing humanitarian…" (ytc_UgxUNuN87…)
Comment
The problem with all of these arguments is that no AI system that exists currently has any mechanism by which it can be controlled - this means that, as of today (2023), there is no way - none - to ensure safety. If AGI is born, it could destroy life as we know it, as in the actual death of all people that exist. A whole host of other risks exist, in which we all get to live, but things go very, very wrong. The concerns might be 'surmountable', but the problem is that if we get control wrong just once with a superintelligence, we may not get a second chance.
youtube · AI Governance · 2023-05-17T13:2… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
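Each coded record carries the same four dimensions shown in the table. A minimal validation sketch in Python; the allowed-value sets below are assumptions inferred from the codes visible on this page, not the authoritative codebook:

```python
# Allowed-value sets are ASSUMPTIONS inferred from codes visible in the
# raw responses on this page; the real codebook may include more values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    # Comment IDs on this page start with ytc_ (top-level) or ytr_ (reply).
    if not record.get("id", "").startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for field, allowed in ALLOWED.items():
        value = record.get(field)
        if value not in allowed:
            problems.append(f"{field}={value!r} not in {sorted(allowed)}")
    return problems

record = {"id": "ytc_Ugy7077bntBMstBT1R94AaABAg",
          "responsibility": "none", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "approval"}
print(validate(record))  # [] for a well-formed record
```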
Raw LLM Response
```json
[
{"id":"ytc_Ugy7077bntBMstBT1R94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxGZ-mM2OcnMSWDPpt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwI1q-het8yxTQn-Yl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyk9JPqd43xFjrh26d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyptKh4c5yfbWW6SbB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzqo3vyetWpEbt4TNF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy69TkwGtyotvTzy_V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz6U5F2bS0YoLThD7J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNilJdNQWMbrFcsTN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzzQ-JIgxOU4gsnN4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
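A raw response like the one above is a JSON array of per-comment code objects, so looking up a comment by ID reduces to parsing and indexing. A minimal sketch, with two entries reproduced from the response above (the function name is illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment code objects,
# as shown above (two entries reproduced here for brevity).
raw_response = '''[
{"id":"ytc_Ugy7077bntBMstBT1R94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyk9JPqd43xFjrh26d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugyk9JPqd43xFjrh26d4AaABAg"]["emotion"])  # fear
```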