Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "the AI need to learn the chive mode. Let the human earn enough money to survive …" (ytc_Ugw8ygoTm…)
- "Robot: what is my purpose / Rick: You pass butter / *the robot looks at his hands in…" (ytc_UggUQCGmI…)
- "Man if only somebody could be in the driver's seat and drive instead of the self…" (ytc_UgxfN8dD-…)
- "*If the AI models used only copyright expired materials* IMO that would be a bar…" (ytr_UgxkXGd0-…)
- "Most computer scientists I know are just waiting for the modern hype of ai to di…" (ytr_Ugw8p1yrN…)
- "I feel like you kinda missed the point of some of these posts. Or they were just…" (ytc_UgxqgX0So…)
- "Hey, if they want to replace me with a fucking robot, I said go ahead and do it.…" (ytc_Ugx-lNOJP…)
- "I'd rather trust a robot with my household duties and my kids. You can barely tr…" (ytc_UgwSPBGFd…)
Comment
I'm of both views, that LLMs cannot get us to AGI and laughably ASI. But LLMs as designed can do incredible damage, both are true, because LLMs which I will call them, because they're not intelligent, so AI is a misnomer, but because of the inability of prediction models to be secure due to their training methods, that in the wild, they could be used to power kamikaze drones, create bioweapons, there are far more easier ways to jailbreak or social engineer a LLM to do these things or acquire this knowledge needed to do this, than safeguards that could be put in place without fundamentally retraining them from scratch with safeguards in check, because since the inception they have been rewarded for pleasing the end user over everything else, those weights go back generations, to me the entire process of self training, scraping the entire Internet and creating a multi dimensional database accessable by a highly sycophantic algorithmic chat bot is dangerous, just not in the Hal 9000 Terminator self aware nonsense.
But in the hands of a highly disturbed person, group or rogue nation a tool to be able to create havoc, kinda of sense.
youtube
2026-02-11T21:3…
♥ 112
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyOEG0CIyGHaGDiqfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxvIl8M_Sp5SsFk0z94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzGGTmzUNHxYUgTdw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBSPLgIZgoxW75T_54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCVwA4MVor_zQghBB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw_9Svs-CQWNenh1dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw5UllL-Gc3unOInb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtjemSbH6YARBVxz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvTPdSqeQYX6hVrFt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8_kwzB0NaDtNUCMh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
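A raw response like the one above can be parsed and indexed for the "look up by comment ID" view. The sketch below is a minimal example, assuming the allowed values per dimension are exactly those that appear in this page's coding table and JSON; the full codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above (assumption: the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a
    value outside the observed schema.
    """
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry.get('id')}: bad {dim}={value!r}")
        coded[entry["id"]] = entry
    return coded

# Example: look up one coding from a (shortened) raw response.
raw = ('[{"id":"ytc_UgxvIl8M_Sp5SsFk0z94AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_UgxvIl8M_Sp5SsFk0z94AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set catches the most common failure mode of JSON-mode coding runs: the model inventing an off-schema label that would otherwise silently pollute the counts.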