Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Given that humanity can't even agree on what consciousness is nor come up with some scientific test to determine if something is conscious, then asking whether AI is conscious is irrelevant. Is it just an LLM predicting the next word in a string or is it actually conscious, who cares? What actually matters is whether an "entity" can form an idea and act on it in the real world. A bear in the forest has ideas and acts on them, does it matter if we say it is conscious or not? No. It matters that it has agency and affects the world.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Moral Status | 2026-02-04T16:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
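The four coded dimensions take values from a closed set of categories. As a minimal sketch, the allowed values below are inferred only from the samples visible on this page (the actual codebook may define more), and the `validate` helper is hypothetical:

```python
# Hedged sketch: category sets inferred from the sample codes shown on this
# page only; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def validate(code: dict) -> list:
    """Return the dimensions whose value falls outside the observed category set."""
    return [dim for dim, vals in ALLOWED.items() if code.get(dim) not in vals]

# The coding result above passes with no violations:
print(validate({"responsibility": "unclear", "reasoning": "consequentialist",
                "policy": "unclear", "emotion": "indifference"}))  # → []
```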
Raw LLM Response
[{"id":"ytc_UgwsSkhwbUVk09DdQ1t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyYrOMKjZY4nBoSNeN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy5twSfulAJU8UxCul4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1sB2aXC5unHgKVLl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgypItr3hyCkpagXJYR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwa9wq2MWTJ3iQ6Q254AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwcFrwQqdYd_orva-B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzccaczMmji6R5-ZGZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzmu3XGSa97p0IjwUB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwwph1DvnjKP-LjZhF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]