Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
- "these days u dont get much good news when the title of the vids say "AI", just a…" (ytc_UgwBwkWX4…)
- "The problem is that companies wish to delete that middle section of people tryin…" (ytc_UgxaASXT_…)
- "While the problem that an algorithm does not work correctly is true in this case…" (ytc_Ugh6MY8m_…)
- "Without UBI it is very difficult to generate wealth if people do not have jobs i…" (ytc_Ugyro43Ek…)
- "I'm with you on the topic of that AI art can be easy and simple to make and just…" (ytc_UgyawTqos…)
- "At the end AI is gonna make the poorer more poor and the rich ones richer, it's …" (ytc_UgxpmUXC2…)
- "15:05 it's not an art form itself, but there are tons of videos where a drawing …" (ytc_UgzF4C40k…)
- "Elon musk still got potential goodness, but "AI" is not human being loaded with …" (ytc_Ugy5jefjx…)
Comment
The problem as I see it is primarily that we build these kinds of AI in such a way, and so heavily trained on human interaction, that we wouldn't have a clue of how to actually probe it for sentience. I Agree: LaMDA sounds sentient. From the transcripts it sounds like someone I should care about. Have empathy with. Yet, all my knowledge about HOW these kinds of systems works, makes me rather sure it does NOT have sentience. It is just so well trained on how we humans communicate, that it can pass with ease . So how do we figure it out? He talks about a Turing test, but I have no idea how such a test could be performed, that would not make LaMDA come out as being sentient. So all we have left is: The system doesn't seem to have the components that we think it would need in order to be sentient. It is just an advanced language/knowledge model. That's it...
Platform: youtube
Video: AI Moral Status
Posted: 2022-07-06T12:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgzxHcyWd_j4xAOSZ3t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgyrxRUWdQDT_cdhJEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx1ucTbqpRw89AMrgp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmMQcduDb-dKtR9Bt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-MqdvyRnMZz5lOxp4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
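Assuming each raw response is stored as a JSON array like the one above (the field names are taken from the sample; the storage format itself is an assumption), the look-up-by-comment-ID step can be sketched in Python:

```python
import json

# A trimmed raw LLM batch response, using two entries from the sample above.
raw_response = """
[
 {"id": "ytc_UgzxHcyWd_j4xAOSZ3t4AaABAg", "responsibility": "developer",
  "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
 {"id": "ytc_Ugx1ucTbqpRw89AMrgp4AaABAg", "responsibility": "developer",
  "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# Look up the coding shown in the table above by its comment ID.
coding = codings["ytc_Ugx1ucTbqpRw89AMrgp4AaABAg"]
print(coding["reasoning"])  # mixed
print(coding["policy"])     # unclear
```

In a real tool the raw responses would come from a database or log file rather than a string literal, but the indexing step is the same: parse the array once, then dictionary lookups by ID are O(1).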