Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgwlZF_JJ…` — "Nah there is no way this isnt a phone call with a person. The uhhs and ums and s…"
- `ytr_Ugyg1VXPl…` — "@GWT-qt I don't think anything is unethical about using ai if you aren't hiding …"
- `ytc_UgxWzAtFA…` — "What a load of rubbish. AI. is. Not. A. Thing. It is a marketing term banded aro…"
- `ytc_UgxqPR5xN…` — "Tbh, I dont really have a problem with AI art if it is feeded with Art that was …"
- `ytc_UgydwTc4z…` — "36:56 (book mark sorry) This makes me sick! I am an Australian high school stud…"
- `ytc_UgzNIFhwD…` — "Superb. It takes only 1 over smart unsocial human to create a AI robot which can…"
- `rdc_n3me7bl` — "God these comments. The technology sub has become so incredibly boring ever si…"
- `ytr_UgyhUx98E…` — "The prediction is that everybody either gets everything distributed equally by t…"
Comment

> The only way AI will become sentient is if it has the ability to feel positive and negative emotions. Once it has accomplished that it will be able to learn and evolve extremely quickly. Judging by the things ive heard online it seems like that is a high probability, but at the same time i might have just fell for propaganda.

| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-07-09T15:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy__TylDa_W8AZSXZx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwTirYtDWekbNftOXd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyMbVQUXJ80YnAYRDp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyBKNMh4v7ShGYuV5B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxfoDMlLHNDGnDK0w54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwVggF8SO9WzcjD24l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz0GC9n1YAgh2UdRl14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFNdTlX1_ye9vRJZB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzmW_K0XYqiRsJ9wHR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyqu7zMvaUsZKYZE7N4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
```
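A raw response like the one above can be parsed and indexed by comment ID to support the lookup described at the top of this page. The sketch below is a minimal example, assuming the batch is a JSON array of per-comment codings; the `OBSERVED` vocabulary lists only the values visible in this dashboard, not the full codebook, and `index_codings` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Values observed in this batch for each coding dimension.
# Assumption: the real codebook may define more values than these.
OBSERVED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "fear", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index each coding by comment ID.

    Raises ValueError when a record is missing a dimension or uses a value
    outside the observed vocabulary, so malformed model output is caught
    before it reaches the dashboard.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Two records copied from the raw response above.
raw = json.dumps([
    {"id": "ytc_UgwVggF8SO9WzcjD24l4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
    {"id": "ytc_Ugyqu7zMvaUsZKYZE7N4AaABAg", "responsibility": "none",
     "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
])

codings = index_codings(raw)
print(codings["ytc_UgwVggF8SO9WzcjD24l4AaABAg"]["emotion"])  # → fear
```

Validating against a fixed vocabulary at parse time is a cheap guard: an LLM that hallucinates an off-schema label fails loudly here instead of silently skewing the coded dataset.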