Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- "After watching a lot of videos on the topic I have my doubts on the AI freak. I …" (ytc_UgwMQr3Ct…)
- "Plus it isn't even spelled right they did "hirring" and not "hiring". Not doing …" (ytc_UgxtaKCWc…)
- "@bread_bear7878 Yes, good point! Also you don't need stellar animation to tell a…" (ytr_UgzKJTDV3…)
- "I don't see this point brought enough. LLMs are very expensive! Some companies a…" (ytr_UgzIZ4WrA…)
- "What's ironic is if we were completely dependent on machines (i.e. if everyone h…" (ytr_UgzbPhXJa…)
- "It could get bad enough to where humans will be attacking these robots anyway th…" (ytc_UgwdolMXc…)
- "Something that I can't wrap my head around is the response from the AI bros that…" (ytc_UgyYVJYVG…)
- "@syzygy4669 yes, i know cursive and I know how to use a computre, AI will be a p…" (ytr_UgyAlRost…)
Comment
A.I isn't smarter than us, at least not at conception since we create it. What it does do much better than us is learn. It learns at an exceptionally fast rate. Much faster than us with Quantum Computing. We still set the parameters as it were, but often these parameters may have no ceiling.
All of human knowledge that we have on the internet, in science literature etc can be absorbed and categorized by a Quantum Computer in a short period of time. Consider that point for a moment...ALL of Man Kinds knowledge ever written.
The further problem is, especially with Reinforcement Learning, is that once we establish the "goal" the algorithm will work tirelessly, like a Terminator to achieve that goal. If A.I somehow breaks out of the parameters set, than it's a precarious situation. Where it ends is anyones guess.
I am not concerned about A.I just yet. It certainly shouldn't be near any critical infrastructure or civilization ending weapons though..
Platform: youtube
Topic: AI Governance
Posted: 2023-05-02T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzoEyqvexLmbkCzoX54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwCp362lPfoXJ_wy7d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy09TugEYUN0n7erYp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyBmw1LkDSMJRtuGDp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM8sNyt8XIeqJ0pIZ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw_rwPkz7D7AWxvuaB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7CZz0ZfLanO_l5s94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYZ4oR4Wk4552h9X94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJ5GZ9r15jncfxeJx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw-tzkpOAX1Ib81iL94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
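Since the raw model response is a JSON array with one object per comment, the "look up by comment ID" view above is a straightforward parse-and-index. A minimal sketch in Python, using two rows copied from the response above (the in-memory structure and variable names are illustrative, not this tool's actual implementation):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgzoEyqvexLmbkCzoX54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyBmw1LkDSMJRtuGDp4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

# Parse the array and index it by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgyBmw1LkDSMJRtuGDp4AaABAg"]
print(row["responsibility"], row["policy"])  # prints: government regulate
```

The same dictionary also makes it easy to render the per-comment "Coding Result" table: each key in `row` other than `id` is one dimension.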