Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by picking one of the random samples below.
- "First, I understand what you are talking about with AI art. It is an easy way to…" (ytc_UgyaO-YnO…)
- "Thank you for sharing this story. When i read about it weeks ago my heart broke.…" (ytc_Ugxk8LIb1…)
- "Keep saying it and will keep saying it. AI is a bad idea and shouldn't be taken …" (ytc_UgyQ92YyH…)
- "Can a AI be kinder and more wise than human though? I am asking seriously.…" (ytc_Ugwj_EA6C…)
- "There is a serious lack of economic fluency in all these AI doomsday discussions…" (ytc_UgxAYU-f6…)
- "u dont have to use ai, you can just practice or use tutorials from other artists…" (ytr_Ugy2Hmdmv…)
- "If AI is already capable of hallucinating then imagination is not a far stretch.…" (ytr_Ugyc-gRDI…)
- "I AM STILL CONCERNED ABOUT THE ADVANCEMENTS OF AI HOWEVER HERES A THOUGHT 💭 SUP…" (ytc_Ugy6WOLX0…)
Comment
Regarding the concerns about superintelligent AI etc, I think first of all that there is a case to be made for some mechanism to be there to cut them off. So for example if there is some research being done on some superintelligence AI there should be some big red button type thing to isolate it. It sounds cartoonish, but I think there is a rational case for this type of thing in the sense that if we set this up we can push the envelope a bit more in terms of what we do with it. Obviously the "big red button" needs to NOT BE SOFTWARE but something that cannot be software controlled or whatever, there is no way to disable access to either. Maybe it needs to be a piece of the cable bundle with a fire axe next to it or whatever, it should be something that is obviously destructive and has no quick reversal method.
I think there are many ways we can avoid ever needing such a thing, IF we all do everything right. That's a big muddafuggin "IF" - whoever has worked on a team developing software knows those awkward team meeting moments when one of more team members did not actually do what they were supposed to, and everyone elses code is only half functional at best because of it. This is why a fire axe or cable guillotine is needed. People don't listen, or they don't care, or don't feel like it, or are busy with something else, or whatever. We'll have the same potential or worse when AI is coding the AI that is coding the AI like an ouroboros eating it's tail but in reverse - we need cable bundle and way to cut it.
If we have it - we can monkey around with far more freedom. Ideally still being careful. The other thing is code portability - if we want to do "extreme research" - lets not do it in a way where the code is portable. So the development environment should have some intrinsic difference so the code that runs on it cannot run on anything else. Then the AI can reprogram itself to it's silicon hearts content inside its play-pen because none of the code can run on anything except those specific computers. There are several ways this can be achieved, I'm sure there are better ideas than mine for it too. It will limit the code portability but that is a feature, not a bug.
youtube · AI Governance · 2025-06-17T13:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
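
The four coded dimensions plus the coding timestamp map naturally onto a small record type. Below is a minimal sketch in Python; the class name `CodedComment` is ours, and the allowed value sets are inferred only from the responses shown on this page, not from the full codebook:

```python
from dataclasses import dataclass

# Category sets observed in the raw responses on this page;
# the real codebook may define more values (assumption).
RESPONSIBILITY = {"developer", "company", "distributed", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "liability", "none"}
EMOTION = {"fear", "outrage", "approval", "indifference"}

@dataclass(frozen=True)
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject values outside the known category sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unknown code {value!r} for {self.id}")
```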
Raw LLM Response
[
{"id":"ytc_UgybCbJ-5-yFLbz0W794AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxeJFPMNnGxHVP9XBh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugynkze7Qgc8wm1qPRt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnMTzxAAyKM2pJ-Gp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaTEMKZEy9lZh1YlN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwicizNCBTwLnGXvA54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypjwF9Ggy-turA80V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxy6_6Lp5jEuT0WaNZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzn8Ve9HEmRkDZj0rN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwpsf9-RX34yDWT2TR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
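
Since each raw response is a JSON array of per-comment records, looking up the coding for a given comment reduces to parsing the batches and scanning for a matching `id`. A minimal sketch follows; the file name `raw_llm_responses.jsonl` and the one-array-per-line layout are assumptions, not something the page confirms:

```python
import json
from typing import Optional

def find_coding(path: str, comment_id: str) -> Optional[dict]:
    """Scan batched raw LLM responses for the record coding `comment_id`.

    Assumes each line of `path` holds one raw response: a JSON array of
    objects like {"id": ..., "responsibility": ..., ...}. The storage
    layout is an assumption for illustration.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Example: retrieve the coding shown in the table above.
coding = find_coding("raw_llm_responses.jsonl",
                     "ytc_UgybCbJ-5-yFLbz0W794AaABAg")
print(coding)
```

A matching record can then be validated against the category sets by loading it into the `CodedComment` sketch above, e.g. `CodedComment(**coding)`.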