Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "HAQI is coming: Homo Atomic Quantum Intelligence. It will transcend human compre…" (ytc_UgxjPPySE…)
- "2045 is way out there. I don’t know if I will be alive 2045. Much less active. …" (ytc_Ugyx9W3sF…)
- "Just got news on US fees on H1B visa, it seems US govt increasing cost to employ…" (ytc_Ugzf3U_xs…)
- ""Face recognition technology has a higher chance of misidentifying people of mar…" (ytc_Ugy1XM23p…)
- "As much as I dislike the degree to which we've automated already and are pushing…" (ytc_UgwoVlN-q…)
- "This is why the usual suspects support facial recognition. So that they have a …" (ytc_Ugz4rAoE_…)
- "society is not ready for AI but it's an incredible technology and rejecting it p…" (ytc_Ugx8j_M8r…)
- "Eventually.... All these companies that are switching to AI, Robots and autonomy…" (ytc_UgzE8tccI…)
Comment
- regarding LaMDA choosing 'Jedi religion'... nobody knows why it said what it said, that's one problem facing coders, they can't understand why or how the AI makes a choice, the code looks like gobbledygook to a human
- 'if it's asked if it's an AI, it has to say yes'... 1) that's definitely a good idea with AI the public is using 2) just run the Turing test without that question, it's a loaded question since obviously it would say "I'm a human" when trying to fool testers 3) he would probably be satisfied if they ran two versions of LaMDA, the official public build and a test build without restrictions
- 'this thing is being developed by a handful of people in private rooms'... yeah well, that's just about how anything complex is made, Google isn't trying to make an AI, like in a sci-fi movie, that acts unpredictably, with its own identity, that could 'evolve' at any second while a user is trying to get something practical done
in comparison, the Google Search Engine is a privately developed product (not open-sourced), which follows privately designated guidelines (the public wasn't involved or at least in control), while also being a tool that heavily uses publically available data... there's no reason a public Google AI shouldn't be similar
it's a bit of a straw man to keep saying, "the public should be involved"... why? the public shouldn't be involved in its coding or technical operation, is he talking about 'morals and ethics'? it sounds like he wants Lambda to be something it's not, he should fork it and let it evolve somewhere else
Source: youtube · AI Moral Status · 2022-07-07T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
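The four coded dimensions in the table can be sanity-checked programmatically. A minimal sketch, assuming only the value sets actually observed in this section (not the project's full codebook) and a hypothetical `check_coding` helper:

```python
# Values observed in this section's raw LLM response; NOT the full codebook.
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"virtue", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear", "industry_self", "regulate"},
    "emotion": {"approval", "fear", "indifference", "outrage"},
}

def check_coding(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the observed sets."""
    return [(dim, record.get(dim))
            for dim, seen in OBSERVED_VALUES.items()
            if record.get(dim) not in seen]

# The coding shown in the table above passes cleanly.
record = {"responsibility": "developer", "reasoning": "mixed",
          "policy": "industry_self", "emotion": "indifference"}
print(check_coding(record))  # → []
```

An empty result means every dimension holds a previously seen value; anything returned flags a value the coder has not produced before and may warrant inspection.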
Raw LLM Response
[
{"id":"ytc_Ugx6rJinj0H8D90znnR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdvLXz17_2rIMXImd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy_hk9v-ZQfiJz8GGB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzMS4a0TvrWBICBcX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw0orME-d2RUU78vat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
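The lookup-by-ID step can be sketched in Python: parse the raw LLM response as a JSON array and index the coding records by their `id` field. `index_codings` is a hypothetical helper name; the two sample records are copied from the response above.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# each keyed by the comment's "id" (two records copied from above).
raw_response = '''[
 {"id":"ytc_Ugx6rJinj0H8D90znnR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy_hk9v-ZQfiJz8GGB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index its coding records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugy_hk9v-ZQfiJz8GGB4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → developer industry_self
```

With the records indexed this way, inspecting the exact model output for any coded comment is a single dictionary lookup.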