Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- What scares me is all these truckers that are fighting to live in caves when eve… (ytc_UgxZ0laS1…)
- Now if we can get something like this for voice recordings so A.I doesn't keep s… (ytc_Ugx3SmV0q…)
- Elon Musk finna colonize Mars and come back and colonize Earth with his AI becau… (ytc_Ugy49YA4y…)
- I am a tradesman but I'm not smug in the slightest. Who generally pays plumbers … (ytc_UgxhuRPA9…)
- I had an idea how to finance a minimum income guarantee (different to UBI in tha… (ytc_UgyGWoBnN…)
- Robot was firing in a row but truck has different kind of damage so acc to me it… (ytc_UgwOCtyEA…)
- In the middle of the video, i realized that after seeing all the ai studio ghibl… (ytc_Ugx3lgmoQ…)
- @zerolayne8245 Transforming (which flipping is also a transformation) requires … (ytr_Ugz4K3N14…)
Comment
Oh I love the part where the boss music kicks in (military got hold of it 🤣). It's scary, but in theory whatever we call AI isn't AI. Then again I don't know about GPT-4.
Also... Just because we as humans could eradicate all flies and whatnot, we don't. While the tale of an AI going rampant sounds convincing, I don't think the AIs primary objective will be to make humans go extinct.
Since humans were smart enough to create AI or rather AGI, it'd be smart to keep humans around for if or when they have another bright idea, that might benefit the AI as well.
Improving the living conditions of mankind might actually improve the chances of AI to expand beyond earth. Sure, AI can calculate a lot or use existing knowledge, but acquiring more knowledge by creating it on its own is definitely slower than doing so AND having others do it as well.
I am not the type of guy, who will be looking for places to hide. IMO there's no use in mere survival anyway and if AI really wanted to kill us all, it'd have means to do so no matter where you hide. If the places were too hard to come by, it could send armies of drones or even nuclear missiles. There'd be no survival.
But why should it care to kill us? It certainly would damage the world and it'd cost quite some effort - what'd be the benefit?
As a programmer, I currently am not aware of any AI that has an initiative and acts on its own. It merely reacts. And if I know anything about humanity, then it's that we as a species have survived a lot and always came back more advanced and while there's a ton of crap going on in the world, there are a lot of positive things AI could learn from.
Platform: youtube · Topic: AI Governance · Posted: 2023-07-07T09:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgySKW176UPvripbH5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyjY2dXlFoeIhNacMR4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxaDIhRkSCKxtWw5nB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRbWRzLCpjC675Vs94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFrtnsGVlYL77Hf-B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5mTfOeUxvWBJMBP14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwgWQElQQR_t2y_Uo14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwlnSezJ9FGb_BLqYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzPX3-Gh9zoltjM77V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgycQu9Gv_dnxZkCA4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
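A raw response like the one above is a JSON array of per-comment codings, one object per comment ID, with four categorical dimensions. The sketch below shows one plausible way to parse and sanity-check such a response before accepting it into the coding table. The allowed vocabularies are inferred only from the values visible in the samples on this page; the real codebook may include other labels, so treat `ALLOWED` as an assumption.

```python
import json

# Assumed vocabularies, inferred from the sample response above — the
# actual codebook may define additional or different labels.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only rows whose values
    all fall inside the assumed per-dimension vocabularies."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items())
    ]
```

Rows with out-of-vocabulary values are dropped rather than repaired here; a production pipeline might instead flag them for recoding.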