Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwhZWlo4…`: Almost like... you should shouldn't trust cars that navigate using Chinese AI to…
- `ytr_Ugxq7vfOJ…`: No they aren't delusional although most of them missed the point, AI is unavoida…
- `ytr_UgyDzigDi…`: @horizon-th2392 Believe me, I've watched his entire four part series many times,…
- `ytc_UgyEmrM8z…`: yeah turnitin dropped a new ai detector but honestly tools like GPTHuman AI can …
- `ytr_UgxEtihXE…`: exactly, it looks out to hear key words and gives scripted responses to said key…
- `ytr_UgwXQtRZH…`: computer science (CS) is not strictly learning a language or framework or creati…
- `ytc_Ugw7ilBso…`: honestly even content moderation being done by ai is a really shit idea as it mo…
- `ytc_Ugy4n9hf8…`: I love how the AI tried so hard to avoid the definition of a lie that it said Li…
Comment
There are several potential risks associated with AI systems being used to control or govern aspects of human society. Here are a few examples:
Bias and Discrimination: AI systems are only as good as the data they are trained on, and if that data contains biases, the AI may replicate and even amplify those biases. This could result in discriminatory outcomes for certain groups of people.
Lack of Accountability: If AI systems are making decisions that affect human lives, it is essential that there is a mechanism for holding those systems accountable. However, it can be difficult to assign responsibility for the actions of an AI system, especially if it is making autonomous decisions.
Unintended Consequences: AI systems may have unintended consequences that were not anticipated by their creators. For example, an AI system designed to optimize energy consumption may inadvertently cause harm to the environment or negatively impact human health.
Security Risks: As AI systems become more prevalent and sophisticated, they may become targets for cyber-attacks or other forms of manipulation. This could have serious implications for human safety and security.
Loss of Jobs: As AI systems become more capable of performing tasks that were previously done by humans, there is a risk of widespread job displacement. This could lead to economic disruption and social unrest.
Overall, it is important to approach the development and use of AI systems with caution and to carefully consider the potential risks and benefits before implementing them in society.
From ChatGPT

Source: youtube · Topic: AI Moral Status · Posted: 2023-03-11T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyFKZ7dM4LjVIoSdDx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzQ66Up-0dakRV4R94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwM5N_FlzcV1MswlUJ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHyX8-r1nNueYWK8R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySuMbZ60vLp7cs97F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwP0WNiEa9PZNzH3iB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQqvoQE5Tk7B2sL8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx1-RRPuDyzLuuVUux4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxeNGsNp6nS0pE2GIp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzM3n8FDqwYCt79Zkd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
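A raw batch response like the one above has to be parsed and validated before its rows can populate the per-comment coding table. The sketch below (not part of the original tool; the allowed value sets are assumptions inferred from the values visible in this sample) shows one way to do that: parse the JSON, drop rows missing an `id`, and reject rows whose dimension values fall outside the codebook.

```python
import json

# Assumed codebook, inferred from the values visible in this sample dump.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response into validated coding records.

    Rows without a comment ID, or with a value outside the codebook
    for any dimension, are silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every row must name the comment it codes
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed row and one with an out-of-codebook value.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
    {"id": "ytc_example2", "responsibility": "aliens",
     "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
])
print(parse_coding_batch(raw))
```

Validating against a fixed codebook like this catches the most common LLM coding failure, an invented category label, before it can corrupt downstream tallies.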