Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think that the only problem with an AI having personhood is the human interacting with it making the assumption that the AI is correct in all things. Even the most intelligent humans throughout all of human history have made (and are making) mistakes and have biases. If an AI is programed to know that it too has limitations and biases and therefor will make mistakes, especially if it acknowledges this to the humans that it interacts with. The danger of 'colonialism' is that the person from another culture might not be informed enough to understand that the biases in the AI exist. So. statements from the AI should include an acknowledgement that it is speaking with specific biases.
youtube · AI Moral Status · 2022-10-11T22:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxf-6_ZnlgnOLEWuAJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzueMdDU2Ol6Fl9fhp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzTa329v785ywVcRoh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyhE5_yB3IEvdGB5Dx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw0hNrvYIVX-p2rb2t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
```
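The raw response above is a JSON array of per-comment records, one object per coded comment. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed value sets are inferred only from the values visible on this page (the full codebook may permit more), and the function name is hypothetical:

```python
import json

# Dimension values observed in the responses shown on this page.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"industry_self", "regulate", "ban", "liability", "none"},
    "emotion": {"approval", "fear", "resignation", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and take a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugw0hNrvYIVX-p2rb2t4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
print(len(parse_coding_response(raw)))  # 1 valid record
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch, which matters when coding many comments per LLM call.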