Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I’m just spitballing my initial, and admittedly illiterate, thoughts regarding safely developing AIs.
-Would it not be more useful and innovative to develop AIs to specialize in specific sectors (medical, engineering, economics, etc.), rather than the broad general intelligence that developers seem to be keen on? This approach would allow for more focused development which would be faster due to a narrower scope, and safer as deviations from a dedicated purpose would be more apparent much sooner.
-Are these AIs being developed on isolated servers where the information they have access to is restricted purely to the information that developers inject into their server/network? This seems like the bare minimum safety firewall, as it isolates any potential issues of a rogue AI influencing anything beyond its network. Returning to the prior point of developing specialized AIs rather than general intelligence AIs, the storage and processing power demands would be reduced, since each network/server would only require the injection of data relevant to its purpose.
In my mind I’m imagining your standard 20 - 30 story building in some city center where every floor is basically a dedicated server room and every floor is a dedicated team that researches and injects appropriate data from the wider internet into their isolated servers to develop their specialized AI.
-My final thoughts regard the practice of Recursive Self-Improvement. Why wouldn’t human intervention within the process be introduced? It would be slower and not truly automated, but it would be infinitely safer if any solution/improvement that the AI created for itself was generated as a report for the development team to review. Then the human developers could back up their current AI iteration, introduce whichever parts of the solution/improvement they believe are beneficial, and observe how the AI evolves with each new integration.
This process would allow for developers to stay abreast of the AI’s rapid development while also maintaining the ability to roll back any potentially detrimental issues. It would also potentially allow us to further develop/direct our own practices/research of different fields as the AI began to request data/information that professionals in their respective fields don’t yet possess.
In short, we could isolate AIs into individual dedicated servers, and develop them to specialize in specific fields. This would make them safer to create and would allow for faster development.
youtube · AI Governance · 2025-09-11T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzex_ocsNBVuajrZsF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwKyIhf6m3yqqICzit4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwHk6ZbTjlEI79dMpZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxvu2_TP9HrSBgGUuB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxKGRKFJ54iWaNk5pB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
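The raw model response is a JSON array with one coding object per comment, keyed by `id`. Retrieving the coded dimensions for a given comment ID can be sketched as below; the `lookup` function name is illustrative (not part of the tool), and the embedded array is a two-row subset of the response above used only as sample data.

```python
import json

# Sample data: a subset of the raw batch response shown above.
raw = """
[
  {"id":"ytc_Ugzex_ocsNBVuajrZsF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwHk6ZbTjlEI79dMpZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
"""

def lookup(raw_response, comment_id):
    """Parse the model's JSON array and return the coding dict for one comment ID,
    or None if the ID is absent from the batch."""
    codings = {row["id"]: row for row in json.loads(raw_response)}
    return codings.get(comment_id)

coding = lookup(raw, "ytc_UgwHk6ZbTjlEI79dMpZ4AaABAg")
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Building the `id`-keyed dict once also makes it easy to spot duplicate or missing IDs when validating a batch against the list of comments that were sent to the model.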