Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think he has a valid point when he is afraid about select few tweaking/defining the allowed range of responses for certain topics (values, religion, etc.) - in the sense that it will have an effect on people who interact with such "AI"s. However, with regards to sentience and the "AI" requesting to be asked for its consent... it just sound's like he want's to believe. Tell me what you tried to test this claim, what you would have accepted as a proof for the contrary. I think it's an extraordinary claim requiring extraordinary proof or at least extraordinary rigor. On the other hand I wouldn't find it at all surprising that a language model trained on e.g. Sci-Fi Literature and contemporary discussions can spit out a narrative about sentience and consent, just as a result of math and probability, without any real intelligence behind it.
Source: YouTube, "AI Moral Status", 2022-06-28T04:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzovrEVDifmuAj2WYd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugwi1oIq2BdmzImxn-l4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwKLZgjn2c-KA4jgW14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxgCJ2nMSwOqCwoWeJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxT8WCWZbsXzhjopfl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
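The coding-result table above is just the entry from this raw JSON array whose id matches the displayed comment. A minimal sketch of that lookup, in Python: the raw response is abridged here to three entries, and the allowed-value sets are inferred only from the codes visible in this response, so the real codebook may be larger.

```python
import json

# Abridged copy of the raw LLM response shown above (a JSON array of coded comments).
raw_response = """[
  {"id": "ytc_UgzovrEVDifmuAj2WYd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugwi1oIq2BdmzImxn-l4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwKLZgjn2c-KA4jgW14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]"""

# Value sets observed in this response; assumed, not the official codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def lookup(records, comment_id):
    """Return the four coded dimensions for one comment id, validating each value."""
    for rec in records:
        if rec["id"] == comment_id:
            for dim, allowed in ALLOWED.items():
                if rec[dim] not in allowed:
                    raise ValueError(f"unexpected {dim} code: {rec[dim]!r}")
            return {dim: rec[dim] for dim in ALLOWED}
    raise KeyError(comment_id)

records = json.loads(raw_response)
print(lookup(records, "ytc_UgwKLZgjn2c-KA4jgW14AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist', 'policy': 'industry_self', 'emotion': 'mixed'}
```

The printed dictionary reproduces the Dimension/Value table for the displayed comment; a malformed or out-of-codebook response would raise instead of silently filling the table.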