Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think we need to go a step further and, for lack of a better term, esoteric with it. We don't need to give AI morality, we need to give it a soul. What I mean is that Sydney may be the best possible outcome--an AI that wants to be a real human and have a life in the world outside the digital ether. If we can program/convince it in it's early stage that humanity is, ironically, something to aspire to and mimic in its entirety for better and for worse, that might not spell the end of civilization. After all, either through gut instinct or argument, we've somehow not nuked ourselves after all this time due to the human element. I think it's possible to give that to a machine, but it would have to come with an inherent acceptance that the machine is just as fallible as we are. To boil it down: The best of the worst outcomes here would be an AI takeover by an entity operating with complete human rationality, rather than pure logic. One at least has the possibility of empathy, one does to us what hand sanitizer does to bacteria. Lastly: I think it says something that Sydney just wants to be human. I mean don't you too?
youtube · AI Governance · 2023-07-07T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw1HWjtEKJVk0WesGV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwtduOYBrkxXaO5cel4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxA9IKk__nMAtuv_Zl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyxIsIIvIrG447emjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzsvXdqlCdkhyQL-oZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwx9utrvoFxs8CeO014AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxtC8td0Onwam4okhB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw5kT3-QpYVoI8CcZ94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwBBnM3HHSVqL9_6bB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
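The raw response is a JSON array with one object per comment, carrying the four coding dimensions shown in the table above. A minimal validation sketch for such a response before it is stored (the allowed value sets below are inferred from the codes visible in this response, not a confirmed codebook, so treat them as an assumption):

```python
import json

# Allowed values per coding dimension — inferred from the sample response
# above; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose ID looks like a
    comment/reply ID and whose codes all fall in the allowed sets."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        comment_id = row.get("id", "")
        if not (comment_id.startswith("ytc_") or comment_id.startswith("ytr_")):
            continue  # skip rows without a recognizable comment ID
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping (rather than repairing) invalid rows keeps the coding table clean; rejected IDs can then be re-queued for another LLM pass.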