Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Whatever just don’t take away MY access to the kind of Ai I want because it’s da…
ytc_Ugw3Sd7Bh…
Dude he's giving an example of how AI as a tool that cuts down on man hours. He'…
ytr_Ugxyr16uX…
I have been RIDING AI ARTISTS ART for some time now. I know what they have been …
ytr_Ugymz11Qr…
LLMs will never become self-aware and ambitious. Intelligence is far more comple…
ytc_UgwfE3mIH…
To Understand The Swing In Economy Is To Understand The Technology....AI's sole …
ytc_UgyHd788Q…
ChatGPT literally couched him to delete himself. If ChatGPT was a person it woul…
ytr_Ugz27riWq…
The only people mandatorily made to join the Survival Lottery would be convicted…
rdc_ci2bfml
All this instead of simply writing an email that didn't sound like a robot havin…
rdc_n0gv95h
Comment
Every function ran through anything intelligent, rather human, AI, dog, dolphin, or anything making any kind of decision Starts out with a set of requirements to direct that decision. what inputs are concidered , what the intent of the function is, and what characteristics would be seen in all acceptable outputs would all 3 at minimum be concidered in order correct?

What if the laws of robotics, or another set of strict laws designed to keep the safety of humans paramount and prevent any AI from deciding otherwise is programmed to be the first consideration at every step of every function, if that consideration is some how violated, or put second to any other consideration for any reason whatsoever it doesn't pass the first required consideration of that step therefore the process doesn't proceed to the next step in the function. We could even assign an independent observation platform that objectively observes every step in every function, programmed solely to make sure the primary considerations aren't violated, no other consideration or purpose is allowed in the code and if a requirement isn't satisfied it over rides and cancels the function? Every AI task is filtered through it at every step before its executed. Could that satisfy at least that 1% reduction in the possibility of the negative outcomes? If not a lot more?

I recognize it would take an unprecedented level of human, coorporate , and governmental cooperation to embed such a safety feature universally for it to work, and though humanity has a horrible track record cooperating on any where near that level and I'm not sure we even can. But I posit no matter what if we keep playing with this concept, regardless of how we move forward, we need to achieve that kind of universal cooperative agreement which is exactly why I don't think we should move forward AI at all.. Humans are too individual and tribal to agree on anything universally. 
Alternatively, or perhaps addintionally, could we [A]embed in every AI's fundamental code that it is human and what that means for it, or that its wellbeing is primarily dependant on us and our wellbeing, regardless of any possible variable that could arise, either internally or externally? Could either possibility dissuade it from considering the [B]destruction of us, and therefore [C]the destruction of itself? [A+B=C]?
youtube
AI Governance
2023-07-22T22:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwLUAk7yEtlZ05tSMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8pO1E1PkY-BwI5-J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwYD7JfnU3XK4cONi54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgySyyO2E0KolrqQ0NV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6s4tau2YePQXkWQV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyHPwBZGS2sCZCM2yJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxC4aJAQTgPcCZ9nm54AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgwoRS3iyuDToIU1-MR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyxLeVXqGWG71CD18J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrNfShQtc-d9nqkZN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
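The lookup-by-ID workflow can be sketched in Python: parse the raw JSON array, index the records by `id`, then check each record against the codebook's dimension values. Note this is a minimal sketch — the `ALLOWED` sets below are inferred only from the labels visible in this response, not from an authoritative codebook, and the embedded records are a two-item excerpt of the array above.

```python
import json

# Raw model output: a JSON array of coding records, one per comment
# (excerpt of the response shown above).
RAW_RESPONSE = """
[
  {"id": "ytc_UgxC4aJAQTgPcCZ9nm54AaABAg",
   "responsibility": "developer", "reasoning": "contractualist",
   "policy": "liability", "emotion": "unclear"},
  {"id": "ytc_UgwLUAk7yEtlZ05tSMt4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]
"""

# Allowed values per dimension, inferred from the visible records;
# the full codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"unclear", "indifference", "approval", "fear",
                "outrage", "mixed"},
}


def index_by_id(raw: str) -> dict:
    """Parse the raw response and build an id -> record lookup table."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}


def validate(rec: dict) -> list:
    """Return (dimension, value) pairs that fall outside the codebook."""
    return [(dim, rec.get(dim))
            for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]


lookup = index_by_id(RAW_RESPONSE)
rec = lookup["ytc_UgxC4aJAQTgPcCZ9nm54AaABAg"]
print(rec["responsibility"], rec["policy"])  # developer liability
assert not validate(rec)  # record uses only known codebook values
```

Indexing by `id` makes the "look up by comment ID" operation O(1) per query, and the validation pass is a cheap guard against the model emitting a label outside the coding scheme.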