Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
- Regarding LaMDA choosing "Jedi religion": nobody knows why it said what it said. That's one problem facing coders; they can't understand why or how the AI makes a choice, and the code looks like gobbledygook to a human.
- "If it's asked if it's an AI, it has to say yes": 1) that's definitely a good idea for AI the public is using; 2) just run the Turing test without that question, since it's a loaded question and it would obviously say "I'm a human" when trying to fool testers; 3) he would probably be satisfied if they ran two versions of LaMDA, the official public build and a test build without restrictions.
- "This thing is being developed by a handful of people in private rooms": well, that's just about how anything complex is made. Google isn't trying to make an AI like in a sci-fi movie, one that acts unpredictably, has its own identity, and could "evolve" at any second while a user is trying to get something practical done. By comparison, the Google Search engine is a privately developed product (not open-sourced) that follows privately designated guidelines (the public wasn't involved, or at least not in control) while also being a tool that heavily uses publicly available data. There's no reason a public Google AI shouldn't be similar. It's a bit of a straw man to keep saying "the public should be involved": why? The public shouldn't be involved in its coding or technical operation. Is he talking about morals and ethics? It sounds like he wants LaMDA to be something it's not; he should fork it and let it evolve somewhere else.
youtube AI Moral Status 2022-07-07T01:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          industry_self
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx6rJinj0H8D90znnR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdvLXz17_2rIMXImd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy_hk9v-ZQfiJz8GGB4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzMS4a0TvrWBICBcX14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw0orME-d2RUU78vat4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
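A response like the one above is a JSON array with one object per coded comment, keyed by comment id. A minimal Python sketch of how such a batch response could be parsed to recover the codes for one comment (the variable names and the truncated sample payload are illustrative, not part of the tool):

```python
import json

# Illustrative excerpt of a raw batch coding response: one object per comment.
raw_response = """[
  {"id": "ytc_Ugy_hk9v-ZQfiJz8GGB4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugw0orME-d2RUU78vat4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the array by comment id so a single comment's codes can be looked up.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up the dimensions coded for the comment shown above.
result = codes["ytc_Ugy_hk9v-ZQfiJz8GGB4AaABAg"]
print(result["policy"])   # industry_self
print(result["emotion"])  # indifference
```

Each dimension (responsibility, reasoning, policy, emotion) maps to a single categorical value, matching the Coding Result table above.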