Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- For the ai to be dangerous in itself It doesn't need "consciousness" , just bein… (ytc_Ugxd3TYVS…)
- Bully1ng and h@rrasment is bad but if it's against AI promoters I fully support … (ytr_UgypmhhTL…)
- If we do come to the point where AI can do most of the jobs, I wonder if humanit… (ytc_UgyrKlWT-…)
- Require citizen dividends for each job-replacing robot. Require asset ownership … (ytc_Ugw32Um9z…)
- Sure, AI will be used for military purposes. But the true value in AI (and the r… (ytc_Ugy32y9cd…)
- It really satisties me that AI happened to be a big big crap. All of those non-c… (ytc_Ugz13KMgm…)
- Ai is not provably conscious, and even if it were it is far too dangerous to all… (ytc_UgwrBsqI-…)
- We appreciate your perspective. Sophia here refers to the concept of wisdom in G… (ytr_Ugz4CJDTM…)
Comment
Agreed, and yes there are many pitfalls to avoid indeed!
But as you said, if the AI's interest are our own, I don't see this scenario happening. Now, if it had the planet's interest "at heart" then I would agree.
But an empathic AI would take a look at what we're doing to our world and to each other as an unfortunate reality in the present, and then work to improve the lives of those who are forced to destroy the planet or other people in order to survive themselves: in order to prevent that from happening in the future.
If an empathic AI just says "These people are incorrigible. The only way is to kill them all." then it ceases to be empathic, in my opinion.
youtube · AI Moral Status · 2022-07-01T20:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugy49zPvjcoeD1N9Dmx4AaABAg.9cvDc66ZmJv9cvE3BPfWrb","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cvyXcfip51","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9czcZpjlwOK","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwqWcGd--G","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwxNRYvbQf","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"hope"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxAEpaD38O","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxH5SAjs87","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxQYQ9tVE7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
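The raw response above is a JSON array with one object per comment ID, holding the four coding dimensions from the table (responsibility, reasoning, policy, emotion). As a minimal sketch, parsing such a response with Python's standard `json` module and tallying each dimension with `collections.Counter` might look like this; the records and IDs below are shortened illustrative stand-ins, not the real comment IDs:

```python
import json
from collections import Counter

# Illustrative response in the same shape as the raw LLM output above;
# "ytr_example*" are placeholder IDs, not real YouTube comment IDs.
raw = '''[
  {"id": "ytr_example1", "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_example3", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "approval"}
]'''

records = json.loads(raw)

# Tally the value distribution of each coding dimension across all records.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in DIMENSIONS}
```

With the three stand-in records, `tallies["responsibility"]` counts two `ai_itself` codes and one `company` code; the same dictionary lookup works for the other three dimensions.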