Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Hegseth have you not watched the Terminator movies. Do you really want to let A…" (ytc_UgwbcuH68…)
- "As usual, this whole 'getting rid of jobs' thing is a bit stupid, like the comp…" (ytc_Ugw6u9Y2C…)
- "Me trying to write the ai a story: / The ai: *continues it like it was never finis…" (ytc_Ugye84xKn…)
- "I know AI & automation is inevitable. And they are going to take far more jobs t…" (ytr_Ugz7kPHT0…)
- "You're not making sense because sentient means it has a conscience. How do you k…" (ytr_UgxC3MegP…)
- "38:00 so just make a company/app that helps people get a complete education for …" (ytc_Ugzu7M8ti…)
- "AI isn’t learning like humans do and “ai artists” don’t create. AI is like a per…" (ytc_Ugyc2Zh6L…)
- "1:08:00 If we don't truly know what consciousness is then how could we program a…" (ytc_UgwZGVnBZ…)
Comment
Most importantly understand that just because you ask it a question and it doesn't even matter how you ask it. It may not give you the correct response or answer, saying that you are not asking a philosophical question, but one that has a definite answer, case in point, if you ask it to do a mathematical equation for you, with historical data, so there are no unknown variables, it's highly likely that it will give you the wrong answer, most of the AI tools out there are really bad at math and I'm talking about simple basic ads subtract multiply and divide, it's just not something they do well.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted at | 2025-06-16T03:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1npb_5YmuCFvAbyh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_D1Zde0jU50bk8ZJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUoagKdxEAcQZZf314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw5MOf2hBkv1b-GbCl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyl289gQi13n4OFosR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxVdPNVROhkmHwvavt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxyuMIMqsC4vL8BRr54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwaejAUjwYYSZd7nRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzaKQzJZgTU-gWfjXx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxbFDRErgTa1WIHMcR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
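The raw response is a JSON array of per-comment codes across the four dimensions shown in the coding table. A minimal sketch of parsing and validating such a response — assuming the dimension vocabularies are exactly the values visible above (the full codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the codes visible on this page.
# Assumption: this is not the authoritative codebook, just what the samples show.
VOCAB = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, skipping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # a row without an id cannot be matched to a comment
        if all(row.get(dim) in allowed for dim, allowed in VOCAB.items()):
            coded[cid] = {dim: row[dim] for dim in VOCAB}
    return coded

# Hypothetical one-row response for illustration:
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
print(parse_codes(raw))
```

Rows with an unknown category value are dropped rather than guessed at, so a downstream tally only ever counts codes from the agreed vocabulary.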