Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples
- "Eventually, only the rich will survive, the rest of us will be eradicated. Betwe…" (ytc_Ugy4KCtg6…)
- "If artificial intelligence becomes capable of replacing people’s jobs, who will …" (ytc_UgymBFMXV…)
- "jeez... And no surprise - Garbage in, garbage out. AI has gleaned the info base…" (ytc_UgyPTh8Sl…)
- "AI art isn't real art. It was a program programmed to generate something. Its no…" (ytc_UgwO8aPfN…)
- "As a programmer and a person who has experience with this I made an AI and it wa…" (ytc_UgyvDMDWD…)
- "These conversations always leave me confused, it's all doom and gloom. AI isn't …" (ytc_Ugwd0sK9s…)
- "Also their Robot & other are State Aided to beat the NAIVE Democracies on this …" (ytc_UgytUTIl_…)
- "@thewannabecritic7490 ai makes me sick. The first time-lapse drawing in this …" (ytr_UgyNziK6r…)
Comment

> If AI's primary objective is human well-being, then maybe it will shutdown nuclear and military threats? And produce more jobs than ever could be conceived? More food, medicine. I really dont understand this apocalyptic train of thought. Yeah, it makes for good Terminator or Age of Ultron movies. But if we write "Human well-being" into AI's primary and most important purpose and objective, and write it properly, then I can't really see it going rogue because that would be against its intrinsic purpose...

Source: youtube · Topic: AI Governance · Posted: 2025-07-20T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-PHat6WdA82I3fll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwShqmGArkBDKjanGZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx84QSTjZxwS1vw0Vx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC72JUNJpzR6Qyjnx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugz8CQcIgC8fk06zBcp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwuZTd8cZqTDmFEWVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzd029lCbCTnlNBIZt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwoPyuiixH7Lb5Y9gB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyIW95_zGm9w17Rj9t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzp32OH0MlOLh7WlOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
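The lookup described at the top of this page reduces to a parse-and-index step: the model returns one JSON array per batch, and each row carries the comment ID plus the four coded dimensions. A minimal sketch, assuming the raw response is available as a string; `index_by_id` is a hypothetical helper for illustration, not part of the actual tool, and the two rows below are copied from the raw response above.

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw = '''[
  {"id":"ytc_Ugz-PHat6WdA82I3fll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx84QSTjZxwS1vw0Vx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# The four coded dimensions, as shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse one batch of coded rows and index them by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codes = index_by_id(raw)
print(codes["ytc_Ugx84QSTjZxwS1vw0Vx4AaABAg"])
# {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'approval'}
```

This is also where a value check belongs in practice: since the model occasionally emits labels outside the codebook, each dimension's value can be compared against its allowed vocabulary before the row is stored.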