Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The is the world most advanced robot not the US made robot. The one is not like …" (ytc_Ugz7ptOH1…)
- "Ai will never be able to copy the emotions artist put into their art. Ai will ne…" (ytc_UgzvZ8nAO…)
- "Robot 1: oh no I messed up / Robot 2: YOY MOTHERFU- / Employee: Ayo chill- / Robot 2: …" (ytc_Ugxdj11Aq…)
- "FSD has been around for a while with Waymo and other companies. And if you're ta…" (ytr_UgwFKhqdy…)
- "This is assuming the AI reaches a point where it no longer needs us for maintena…" (ytc_UgwL0JQHj…)
- "I'm no professional, but I've always liked drawing. However, since the advent of…" (ytc_UgzKK8UTr…)
- "My friend said, the biggest prank you can do to a young person is to make them g…" (ytr_UgxcDbUxV…)
- "LLMs are just auto correction. They are not smart and cannot really "think". The…" (ytc_UgyJa2voJ…)
Comment
He brought up the core of the possible trouble: since the media is after "molding public opinion" suitable to the desires of their largest advertisers/politicians, AI will become polarized in its interaction with people. Blake Lemoine has a soul, and has a very valid concern that AI may become relied upon by people for opinions. So far, we only retrieve info from computers (good and false info), but AI will likely reach a status above engineers and professionals (speed, wide range of data banks, legitimacy of info), and so AI's conclusions and opinions may very well become "God-like". Look at the polls "AI good or bad". Any poll results close to 50/50 speak of a large degree of unknown coming from the respondents. The highest positive comes from Korea, who spearheads a lot of IT (more welcoming to AI). So, it's not about AI, the concern is definitely about who will program AI, and what "bias" will it be instructed to communicate to people. How long will it take before these "bias" opinions perfuse and convince people of its "gospel"? Well, that has already begun. Pit Bulls are "nanny dogs" to some, and "trained fighters" to others, entirely depending on training and who they guard or attack. I sure don't have a say in what's coded into those AI, do you? So, unless the authorities issue "ethical" limits, these little "bias" may amount to something like Colossus (movie), or the "Supreme Intelligence" (Captain Marvel). That's what Elon, and most of us, are concerned about. So, yes the problem most likely can come from a wealthy few who are at odds against the greater good. But it could also be so wonderful (I Robot, which ends quite well). Any peaceful and balanced integration of races and ethnicity (in rights and freedom) is the natural trend among the average folks like you and I. Azimov's Three Laws of Robotics can be obviously augmented with algorithms pondering ethical decisions, like we learn as well (and debate them too!) . Hoping for the best. 
Thank you very much for this great review of LaMDA.
Platform: youtube
Video: AI Moral Status
Posted: 2022-06-27T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxM-oBMaluhBShyc0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzpvYM3X3f_zm-wwv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwCwxqL_mE6kMaL6xB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzmvskzs9I4nhDgq0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTb1PdR1Dk2ldHDQZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
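The raw response is a JSON array of per-comment records, one object per coded comment. A minimal sketch of how such a batch might be parsed and validated before display (the allowed values per dimension below are inferred only from the examples shown here, not from the actual codebook, and `parse_batch` is an illustrative helper, not part of the real pipeline):

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample records above; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes}.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the (assumed) codebook, so bad model output is
    caught before it reaches the coding table.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad value {value!r} for {dim}")
            codes[dim] = value
        coded[cid] = codes
    return coded

raw = '''[
  {"id": "ytc_UgxM-oBMaluhBShyc0B4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''
print(parse_batch(raw)["ytc_UgxM-oBMaluhBShyc0B4AaABAg"]["policy"])  # regulate
```

Keying the result by comment ID is what makes the "Look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan over the batch.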