# Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up directly by its comment ID.
Random samples (select any to inspect):

- `ytc_Ugy7oMxXE…` — Self limiting factor of "all jobs gone": people just stop buying all non essenti…
- `ytc_UgwGNh-c8…` — It's about time. Truck drivers know their end is clear. Will they train for an…
- `ytc_Ugyngh26N…` — Moderator is kinda right. For now, AI will only replace the bunch of people with…
- `ytc_Ugz8Ub9WX…` — I use AI via api daily where even simple instruction are misconstrued or ignored…
- `ytc_UgzWaedvy…` — If you asking if its different or same in terms of replicating, it's actually qu…
- `ytc_Ugy-WQvlb…` — Quantum computers as well as quantum mobiles will change the course of how the A…
- `ytc_UgxVIrA_s…` — My view of art is that it’s a piece of the artist itself, people can start art a…
- `ytc_UgwW-HChz…` — Most people who look at these paintings don't know the history, so functionally …
## Comment

The comment text below is reproduced verbatim, including the author's original spelling, since the coding was applied to this exact text.

> Quite honestly AI ending humanity is a miniscule risk when compared to the risks of AI supplanting the need for humans to do any kind of labor for money. There is no question that AI will become more advanced than humans when it comes to intelligence level, but AI will not have to actively work to cause chaos in human society; it's mere existance and ability to work smarter, faster, and cheaper than humans will itself create an envirnment that is very unstable.
>
> With regard to stopping or slowing AI development, this is a fools errand. If the US or even all western contries stop AI development, this will not stop Russia or China, in fact it will only help them get ahead. Edward Teller said that his great achievement was not that he invented the hydrogen bomb, others would have and others did, his achievement was that he advocated for it be created. He understood that if it was possible to do it, it must be done to know of it's possibility, otherwise advisaries could do it first and have an upper hand that nobody else knows is even possible for sure.

Source: youtube · AI Governance · 2023-07-07T14:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
{"id":"ytc_Ugw1HWjtEKJVk0WesGV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwtduOYBrkxXaO5cel4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxA9IKk__nMAtuv_Zl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyxIsIIvIrG447emjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzsvXdqlCdkhyQL-oZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwx9utrvoFxs8CeO014AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxtC8td0Onwam4okhB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw5kT3-QpYVoI8CcZ94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwBBnM3HHSVqL9_6bB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
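Each record in the raw response carries a comment ID plus the four coding dimensions shown in the table above (the table's values match the record for `ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg`). A batch like this can be sanity-checked before use. The sketch below is a minimal validator; the allowed value sets are assumptions inferred from the values visible in this export, not a definitive codebook.

```python
import json

# Assumed value sets per coding dimension, inferred from this export only.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "company",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "liability",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed", "unclear"},
}

def validate_records(raw_json: str) -> list[str]:
    """Parse a raw LLM response and report records whose fields fall
    outside the expected schema or value sets."""
    problems = []
    for i, rec in enumerate(json.loads(raw_json)):
        # Comment IDs in this export all start with the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            problems.append(f"record {i}: missing or malformed id")
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: bad {dim}={rec.get(dim)!r}")
    return problems

# One record from the response above; a clean batch yields no problems.
sample = (
    '[{"id":"ytc_Ugwp7u2Q1_X8MiBC1d94AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"resignation"}]'
)
print(validate_records(sample))
```

An empty list means every record parsed and every dimension held an expected value; anything else pinpoints which record and field to re-inspect.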