Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
Dr. Yampolskiy is not a fraud....not even close. He's a credentialed researcher …
ytc_UgzmMCjRR…
@thejumper8496but then again, how would the ai do it without our art. If all of…
ytr_Ugyd6nttW…
How can an AI train itself without access to grounded and high-quality experienc…
ytc_UgyY9j5CH…
Zuckerberg is an AI robot. Only a robot would ignore human safety from AI robots…
ytc_Ugy5sCZ6d…
eemm, how exactly he being "destroyed"? like showing that AI art sets trends. An…
ytc_UgzXSbpdY…
Not to mention the bottlenecks that will make in retraining. I've lived through …
ytc_UgwwwYcIS…
How much would a male robot be that looks like Jason Mamoa ??? Juuuussssttttt wo…
ytc_UgzqwmMPw…
one issue I've thought about is how we might give conscious AI morality. that wo…
ytc_UgyQB5hgH…
Comment
The real threat of ai is that it is turning humans into livestock. Leaving them so dependent on computers that they can’t even function on their own or tell the difference between reality and fantasy. In the sci-fi movies this tech is developed by trained professionals who know what they are doing and have magic systems to stop this chaos. In the real world ai is developed by every idiot with a smartphone. If nothing happens first, humanity will be driven to extinction by virtue of incompetence. My money is on a society that uses tissue paper soaked in glue as emotional armor using their entitlement to trigger a nuclear war with bad jokes. Even ai imagine generators can destroy the world in moments. Past midnight, we are past 8pm 20 years later.
youtube · AI Governance · 2023-07-07T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz4lUFURAGaZqrHd_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz6-5EXoXIe_VcdKul4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzRvOqBv7hJD_1jzp94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw_Ri-_VcQRkE_jVP14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwp5EsFfQaq-fRsFe94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxgKgxy-0EoRz9iRHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2sAuM7xcD1bhdry14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFi8zXC6vHeea4NFl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxA-ouDTDqNRMZBp0h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwwKDY5a34Rqblzazh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
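The raw response above is a JSON array with one object per coded comment, keyed by comment `id` and carrying the four coding dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response into an id-indexed lookup — the `parse_codes` helper and the excerpted records are illustrative, not part of the tool:

```python
import json

# Excerpt of a raw model response in the format shown above (two records).
raw = """[
  {"id": "ytc_Ugw_Ri-_VcQRkE_jVP14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz2sAuM7xcD1bhdry14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Index coded comments by id, keeping only the coding dimensions.

    Missing dimensions fall back to "unclear", matching the codebook's
    catch-all value in the response above.
    """
    records = json.loads(raw_response)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

codes = parse_codes(raw)
print(codes["ytc_Ugw_Ri-_VcQRkE_jVP14AaABAg"]["policy"])  # regulate
```

Indexing by `id` makes the "look up by comment ID" view above a single dictionary access rather than a scan over the response array.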