Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- "AI killed a whole ancient project. Well Internet Ancient. NaNoWriMo Sure that or…" (ytc_UgzpPMigF…)
- "The irony of plumbing as AI safe job is misunderstanding that the current plumbi…" (ytc_UgxadmVXi…)
- "Hello there, I'm future linguist... perhaps... I feel like I'm the first person …" (ytc_UgwRvSQrP…)
- "I thought AI because the eyebrows follow too much of a pattern. Looks like a sol…" (ytc_UgxuNolns…)
- "Even a microwave is closer to a chef than ai “art” is to art, you still have to …" (ytc_UgxOrsUcg…)
- "idea: just dont tell the ai what it is or what an ai is and ask it if it can tel…" (ytc_UgywWIBnC…)
- "yeah man, my wife is an artist and she even was moved by AI art. These annoying …" (ytr_UgxK7_rE6…)
- "This is an absolutely great talk that singles out the key open points about arti…" (ytc_UgzJbLa1n…)
Comment
For AI in software form to threathen our existence we would first need to be stupid enough to give it our most critical tasks, for example the most famous one , pictured in Terminator movies, we would need to hand it over the power to launch nuclear missiles without any or very limited human input so that it could trick a human leadership into giving it such approval, causing a war that decimates our populations.
Now ambulating physically existing robots is i believe a different territory, we still won't see for at least a few generations a time where these robots could remotely threaten our existence, not because they aren't powerful enough, but because they simply do not have the numbers to physically dominate us, and as these coming generations further refine AI , they would undoublty make sure that this possibilty of a rogue AI is further reduced.
youtube | AI Governance | 2024-01-17T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxLqRbbs_-MdbP5QzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyIZlsfYVw8XtGIYs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyNLMVsbBS1nZUXVFt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw6bVrribY5zSP4a_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwY_eVMdEmt9omMF1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxLxca36Zy2OTRIlC94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOsJFCNJ1El_a7L3J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyt1ud4GSrts4rNn_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAIRYx9ECF8yJsmvF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzSafEDdvuONSeE_Wt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
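The raw response is a JSON array with one object per comment, keyed by comment ID and the four coding dimensions. A minimal sketch of how such a payload could be parsed and validated, assuming the allowed value sets inferred from the samples above (the project's actual codebook may define additional categories):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# These sets are assumptions, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed",
                "indifference", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows whose ID looks like a
    comment/reply ID and whose values fall inside the allowed sets."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue  # malformed or missing comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
print(len(parse_codes(raw)))  # 1
```

Dropping invalid rows rather than raising keeps one bad code from discarding a whole batch; a stricter pipeline might instead log the offending rows for re-coding.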