Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "that one robot: “girl we aren’t🙄” and then does the side eye💀 (NO CAUSE I FORGO…" (ytc_Ugz-uiIJ2…)
- "Idk. Bad people are gonna do bad stuff. AI isnt bad. Bad people are bad. I don't…" (ytc_Ugx7QG57S…)
- "If the majority of us are to lose our jobs/ incomes, who then will be the new co…" (ytc_Ugx-pc7Wk…)
- "I think you are so so very wrong about this The vast majority of legal work is …" (rdc_fcrztz1)
- "I guess satan wanted to make human-oids of his own Repent and be baptized the en…" (ytc_Ugzx08Jta…)
- "This might sound crazy but the reality is the earlier models for LLM wouldn't do…" (ytr_Ugz6kvTez…)
- "While AI can help with the grunt work, what it cannot do is create novel things.…" (ytc_UgwDO5SEu…)
- "Not interested. This discussion was divided from the start. There is a certain c…" (ytc_UgwLt-50j…)
Comment
> 0:26 Yes, I first thought of this “stopping the other” logic about four years ago. But ASI cannot be 100% right about its conclusions. There are likely to be guesses calculated in. The artificial intelligence problems are too numerous and too fraught to ever be resolved, and geopolitical and profit dynamics will guide all outcomes. Yes, this means there’s no hope to save humanity. Many intellectuals have come to this grievous conclusion, but their outward pronouncements are measured (& deceptive), governed more by epistemic quietism than candor.

| Source | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2025-08-05T21:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz1WRDw2vm7PGpl-fp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzMLuTBbi6tWWNl9al4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzAPFpRNO_85va6l394AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz1gveJEa1JhVJ7O5B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzsQTKGfqoWWbRL1tV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwPMOLzZXlSMajF_Od4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy2WhvU3eZbDCUHbtN4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwQKeXblI8BQm6iPKx4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx-ALnYBj5KnrB-6Dl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwsRBWGtegtRzBT88t4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"}
]
```
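A raw response like the one above can be parsed and sanity-checked before indexing it by comment ID. The sketch below is a minimal illustration, not part of the tool itself: `parse_batch` is a hypothetical helper name, and the allowed value sets are only the values observed in this one sample response; the actual codebook may define more categories.

```python
import json

# Value sets observed in the sample response above (an assumption:
# the real codebook may permit additional values per dimension).
OBSERVED_VALUES = {
    "responsibility": {"none", "distributed", "ai_itself", "company",
                       "developer", "government"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "none", "liability", "industry_self"},
    "emotion": {"indifference", "resignation", "outrage", "fear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a batched coding response and index codings by comment ID.

    Raises ValueError on missing IDs or unrecognized values, so drifting
    model output fails loudly instead of silently entering the dataset.
    """
    index = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in OBSERVED_VALUES.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        index[cid] = {dim: row[dim] for dim in OBSERVED_VALUES}
    return index

raw = ('[{"id":"ytc_UgzsQTKGfqoWWbRL1tV4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
codings = parse_batch(raw)
print(codings["ytc_UgzsQTKGfqoWWbRL1tV4AaABAg"]["emotion"])  # resignation
```

Failing loudly on unexpected values is a deliberate choice here: when the model invents a new label, it is usually better to flag the batch for review than to store an unvetted category.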