Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytr_UgwiIdCZL…`: "AI thinks about it for 30 milliseconds, decides that the only way to achieve wor…"
- `ytc_UgyG1aR9H…`: "Maybe AI would have created something more along those lines if you actually gav…"
- `ytc_UgxzDt5bM…`: "Well not exactly. I want to have a story. That's what I care about the most. I…"
- `ytr_UgxnMGA27…`: "That's a great question! The AI, like Sophia, relies on extensive data and algor…"
- `ytr_UgxMbEGqy…`: "Thank you for your comment! It's interesting to consider how AI like Sophia can …"
- `ytc_UgyscM3Y7…`: "I'm getting my metal baseball bat (joke) ready, they are a threat to humanity bi…"
- `ytc_UgyaMrZWH…`: "After the Turnitins new update, most of these humanizers become useless because …"
- `ytc_UgypM-pDN…`: "There is not a standard scientific term for soul but there are religions ( e.g. …"
Comment
> Regulating the AI so he can create an AI, not because he fears AI not being under control for humanity, but not being under his control.
> I am sorry for the FANBASE, but here is not protecting humankind what we are witnessing, is a battle between owning the most powerful tool ever created.
> All break through technologies will always bring social changes. You do not want to have a tool that allows people to have more time and automate their use of time. What you want is provide that tool, so people do not own their freedom. At the moment there is not any AI that will probably allow that for humankind, unfortunately not even Openai. But Openai is so far the fairest tool around AI at the moment.
> Remember that some people are convinced that we should have a chip in our brain to be more capable... just link projects and goals. And you will see a clear strategy of why stopping Openai research.
youtube · AI Governance · 2023-04-23T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
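Each coding assigns one label per dimension. As a minimal sketch, the check below validates a coding against the category labels that actually occur in this report's sample batch; the full codebook may define additional labels, so treat these sets as observed values, not the authoritative schema.

```python
# Category labels observed in this report's sample batch (assumption:
# the real codebook may allow more values than are seen here).
OBSERVED_CATEGORIES = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dimension, allowed in OBSERVED_CATEGORIES.items():
        value = coding.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} label: {value!r}")
    return problems

# The coding shown in the table above passes:
assert validate_coding(
    {"responsibility": "developer", "reasoning": "virtue",
     "policy": "regulate", "emotion": "outrage"}
) == []
```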
Raw LLM Response
```json
[
  {"id":"ytc_UgzWGuv78LsTXSSj4Fl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxeRzO26_Qn-4PFXg94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz29XvWOZMVISS4Ted4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgztwTBfItMrrCt8wiN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdQMkuB8V8PgAVCS14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzY0aboggnWPRRnO5R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws_uGLiq4bMmziuap4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzolb72vRXueXzV1oh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzz-vjQ6OXz8mBDSEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5lcauBGuY7YwRzRh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
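Because the model returns one JSON array per batch, looking up the coding for a single comment means parsing the array and indexing it by `id`. A minimal sketch, assuming the batch format shown above (the three entries inlined here are taken from this report's response; a real batch carries all ten):

```python
import json

# Raw LLM response, truncated to three of the ten records shown above.
raw_response = '''
[
 {"id":"ytc_Ugz29XvWOZMVISS4Ted4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzY0aboggnWPRRnO5R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy5lcauBGuY7YwRzRh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index each record by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
print(codings["ytc_Ugz29XvWOZMVISS4Ted4AaABAg"]["policy"])  # regulate
```

Indexing by `id` is what makes "inspect the exact model output for any coded comment" a constant-time lookup rather than a scan of every batch.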