Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. isn't smarter than us, at least not at conception, since we create it. What it does do much better than us is learn. It learns at an exceptionally fast rate, much faster than us, with quantum computing. We still set the parameters, as it were, but often these parameters may have no ceiling. All of the human knowledge we have on the internet, in scientific literature, etc., can be absorbed and categorized by a quantum computer in a short period of time. Consider that point for a moment... ALL of mankind's knowledge ever written. The further problem, especially with Reinforcement Learning, is that once we establish the "goal", the algorithm will work tirelessly, like a Terminator, to achieve that goal. If A.I. somehow breaks out of the parameters set, then it's a precarious situation. Where it ends is anyone's guess. I am not concerned about A.I. just yet. It certainly shouldn't be near any critical infrastructure or civilization-ending weapons, though.
youtube AI Governance 2023-05-02T13:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzoEyqvexLmbkCzoX54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwCp362lPfoXJ_wy7d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy09TugEYUN0n7erYp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyBmw1LkDSMJRtuGDp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyM8sNyt8XIeqJ0pIZ4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw_rwPkz7D7AWxvuaB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw7CZz0ZfLanO_l5s94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyYZ4oR4Wk4552h9X94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxJ5GZ9r15jncfxeJx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw-tzkpOAX1Ib81iL94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
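The per-comment "Coding Result" table above is presumably derived by looking up the comment's id in this JSON array. A minimal sketch of that lookup, assuming the response parses as standard JSON (the `coding_for` helper is illustrative, not the tool's actual API; the sample below keeps only the one record shown in the table):

```python
import json

# One record excerpted verbatim from the raw LLM response above.
RAW = '''[
  {"id": "ytc_UgyM8sNyt8XIeqJ0pIZ4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''


def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            # Drop the id key so only the four coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return {}


print(coding_for(RAW, "ytc_UgyM8sNyt8XIeqJ0pIZ4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'indifference'}
```

Returning an empty dict for unknown ids keeps the caller's handling simple; a real pipeline would also want to validate that the LLM emitted exactly one record per comment.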