Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Then they could offer the automated service and the human delivery but it will b…" (`ytr_Ugx0t3Big…`)
- "@HardKore5250 U answer direct questions like a robot. Very good. Perhaps. Is Bi…" (`ytr_Ugwa1ko2Z…`)
- "AI will end most jobs..... Universal basic income is the future unfortunately! …" (`ytc_Ugyi9ffPP…`)
- "Don't they know AI is going to destroy the world for humans. Why is that you ask…" (`ytc_UgyrZCq2y…`)
- "Can you imagine murderers using deep fakes to make someone else seem like they d…" (`ytc_Ugzht83zL…`)
- "I get your point, but somewhat incorrect. AI models can rely on non-linear funct…" (`ytr_UgwbVx7TD…`)
- "Yea this is how I felt too. Charlie's argument that it isn't art because its low…" (`ytr_Ugw4uxsCj…`)
- "No one is gonna volunteer to fight a metal robot designed to fight.. upload Muha…" (`ytc_Ugx9G3Bow…`)
Comment
It took far longer to evolve from mice to apes, than from apes to humans. Before AI nearly as smart as humans gets from the laboratory and into actual use, a superhuman AI will probably be created. Once an AI as smart as an AI programmer (and suppose its much faster, try to name a task that computers can do as well as humans, but the computer is slower.) is created, it can do years of AI research in minutes. Quickly we get vastly superhuman AI. Then it doesn't matter what the humans say or do (unless the AI has been programmed to listen to humans and do what the human wants), that AI is going to get what it wants.
| Platform | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2020-07-08T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugym08kqdNxUkx2-10h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw_FVBepk0HmLi4XIl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-doPVTFSsH45POn14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy_U1c8hw1dbnQUZP54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwSfROM85Ux7Gs0y7F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOEG9iNvpbccy3GwZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzX6dYAXDanVaf0hUh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRMrZoIrY08Mv59Ht4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKIra1BpAyZvQy4YZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwnY8UO0f3NEOf4jd94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"}
]
```
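The raw response is a JSON array of per-comment codes, one object per comment ID, across the four dimensions shown in the coding-result table. A minimal sketch of how such output might be parsed, validated against the codebook, and indexed by comment ID before merging back into the dataset; the allowed-value sets below are inferred from this one sample and the table above, so the real codebook may contain additional categories:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The actual codebook may define more categories than appear in this sample.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Records missing an ID or containing out-of-codebook values are
    dropped rather than merged, so they can be re-coded later.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # model emitted a record without a comment ID
        if any(rec.get(dim) not in values for dim, values in ALLOWED.items()):
            continue  # at least one dimension has an unrecognized value
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded comment resolves in a single dictionary access rather than a scan over the response array.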