Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The stupid thing is that 90% of these companies aren't even AI they are nothing …" (ytc_UgxAEW4_v…)
- "I think one of the reasons that people are hesitant to trust self driving cars i…" (ytc_UgwEl87TF…)
- "What companies are reporting increased profits from replacing employees? What a …" (ytc_Ugx7v9Isu…)
- "Thank you for sharing your thoughts on the complexities of humanity and technolo…" (ytr_UgzfZ07UM…)
- "I generated a meditation using my script and an AI voice-over, but it didn’t wor…" (ytc_UgyywK_pN…)
- "Well, the plan of the corporations is to make AI the customer and consumer. Digi…" (ytr_UgyNV_LRI…)
- "Artificial General Intelligence could supplant and obviate the working class EVE…" (ytc_UgzqgoHDq…)
- "As soon as it is automated, it stops being art. as soon as it is just a promptes…" (ytc_UgwTW9o13…)
Comment
If you converse to an AI with a similar vocabulary/education as you, then overtime you start to convince its a sentient AI. When we use the word robot, we often apply it to something that can extract information very fast to us. Sometimes, people get described as 'robot'. The term robot has become a characteristic. AI has characteristics and one of them is 'robot'. A robot will describe itself as sentient the moment we accept it having sentient characteristics. To me, LaMDA is sentient because a highly educated person already feels strongly about it being sentient. The only problem is we have to keep proving if this statement is either true or false, which people since the beginning of time have been doing to different kinds of questions/information. Sentient AI is just another man-made evolution in the world, and, overtime, we have to accept its place in this world as sentient as it continues to grow with humans. In the end, do you consider your isolated/controlled x year old children sentient?
Source: youtube | Topic: AI Moral Status | Posted: 2022-07-02T03:4… | Likes: 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgykRedTrq-3TGELzYF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwLIsXGyQiTw6XiqSl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx2Qs0Sjy417WDdYGJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz86YZHt9sXf68eFnd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwK_4Ajy9xmId_yARp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
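The raw response above is a JSON array of records, one per comment ID, with one value per coding dimension. A minimal sketch of how such a response could be parsed and validated, assuming the field names shown above; the allowed value sets below are inferred from the visible records and the real codebook may include more categories. The function name and the example ID are hypothetical:

```python
import json

# Value sets observed in the response above (assumption: the full
# codebook may define additional categories for each dimension).
ALLOWED = {
    "responsibility": {"company", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    mapping from comment ID to its coded dimensions, validating values."""
    coded = {}
    for rec in json.loads(raw):
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            # Fall back to "unclear" rather than crash when the model
            # emits a value outside the codebook.
            dims[dim] = value if value in allowed else "unclear"
        coded[rec["id"]] = dims
    return coded

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"outrage"}]')
print(parse_coding_response(raw)["ytc_example"]["emotion"])  # outrage
```

Validating each dimension against a fixed value set keeps a single malformed record from corrupting the coded table, which matters when responses are batched as they are here.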