Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "One fatal accident with a driverless truck happens dose the CEO from the company…" (ytc_UgygSepL5…)
- "Unworthy Unbeliever Technically AI refers to some of those more complex algorith…" (ytr_UghsimJu0…)
- "And who do these morons think created the world they live in with hard work Oh d…" (ytc_UgywJi2EI…)
- "This bullshit fear mongering has got to stop. Every industry in the country stan…" (ytc_UgwIXebpf…)
- "If your question is paying, it seems Anthropic payed for seemingly hundreds of t…" (ytr_UgzAPpAju…)
- "Boycott AI companies and any companies that use AI as an excuse to layoff employ…" (ytc_UgxIxsiWD…)
- "It’s a shame to casually mention that Israel uses AI to target potential “Hamas”…" (ytc_Ugw8Aag95…)
- "2025 /2026 the horse has well and truly bolted on Regulating AI /SAI LOOK AT …" (ytc_UgyW_HWJC…)
Comment
this guy is ridiculous and is most certainly seeking clout at this point. LaMDA is basically a really good conversation guesser. In a very basic sense, it compares the conversation context to that of similar conversations that it has been trained on, and then decides what it mathematically finds to be the most probable response. All of the conversations shown are incredibly leading, and it's no surprise to anyone with half a brain and a slight understanding of AI that it would've responded the way it did. Especially if you've seen things from GPT-3, etc. In fact, it's not even the most impressive conversational AI example I've seen this year...

Just a quick way to show it's not sentient: ask it, from its experience, what a certain food or drink tastes like, or even what its favorite is. Almost certainly it will say "oh it tastes like this" or "i love this food", but in reality we know this isn't true. It's never had said food, and considering we explicitly asked about its experience, we know that it's wrong and is only spitting out the answer it algorithmically concluded we want to hear.

What this really shows is that deep learning language models are getting so good and well trained that they can fool average joes into thinking they're a person, which is far, far more dangerous than AI becoming truly sentient imo. In true sentience it would understand morals and emotions; since it cannot, any bad actor could theoretically train and apply it in a dangerous context, like scamming you online.
youtube · AI Moral Status · 2022-06-30T15:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzwawWu4lri2mkMQpB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyt6i_fAvd6N9-6xbN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9C8Svs2Uk0l6AogJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxw-Jci9lyern7h4cN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzjmYV0C8qbl6fiBRV4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"}
]
```