Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@WALDENSOFTWARE Yeah, pretty much music, coding, writing, any white collar work …
ytr_Ugw8SPt2q…
Thank you for sharing your thoughts! The conversation around AI, like the one be…
ytr_Ugx_3mVFu…
I just started this video and it has already thrown me off when the AI said "um"…
ytc_Ugxy04RgI…
Lmao, they actually told people AI could replace 85% of devs?! 😂
It can’t even r…
ytc_UgxTnu5lQ…
The terrifying part is that everyone that saw this could see it happen. Don't fo…
ytc_Ugz2sAuM7…
> If a programmer with AI is 3x as efficient as before, he can replace a lot …
rdc_n7k6eti
smudge and paint bucket are tools that are used in digital to make it easier and…
ytc_UgzeYCZsW…
The one word rule makes it more scary than it is since there's no real context s…
ytc_Ugy02nV3F…
Comment
When I asked Chat GPT if it could ever eliminate mankind, this is what it said,
AI is designed to follow specific rules and algorithms, and it can only make decisions based on the data and instructions it has been given. AI lacks emotions and desires, and it does not have the capacity to act independently of its programming. Therefore, an AI would not have any inherent motivation to harm or eliminate humans. However, it is possible that an AI could be programmed with harmful intentions, or it could be manipulated by humans with malicious intent. Additionally, there is the risk that an AI system could malfunction or operate unpredictably, leading to unintended consequences. To mitigate these risks, it is crucial that AI development is guided by ethical principles and regulations, and that AI systems are regularly monitored and audited for safety and security.
Platform: youtube
Topic: AI Governance
Posted: 2023-04-18T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxRdzZ0LAJX_UoYm3x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzoapS7QooSzfhhdsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxkMzgEtQAIQeINzZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1luipc7srNbxEkX14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwrmiDXDneUUyHq2MV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzmkPkP3va4KExrVCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzqXwGYOD7pWU1GX0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGfN1K5osyRKqiH5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz1CoUb7hNYHaJ_3s54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyOCCRs8b09y4knq7d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
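The raw response above is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of how such a batch might be validated before storage and then looked up by ID, assuming the allowed category values are the ones visible in this sample (the real codebook may define more; `validate_codes` and `ALLOWED` are illustrative names, not part of the tool):

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above -- an assumption; the full codebook may include other categories.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"fear", "outrage", "mixed", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment.

    Raises ValueError on missing IDs or values outside the allowed sets,
    so a bad batch fails loudly instead of silently entering the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example: validate one record from the response above, then index by ID
# to support the "look up by comment ID" view.
raw = ('[{"id":"ytc_UgzGfN1K5osyRKqiH5F4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"approval"}]')
by_id = {r["id"]: r for r in validate_codes(raw)}
print(by_id["ytc_UgzGfN1K5osyRKqiH5F4AaABAg"]["emotion"])  # approval
```

Failing records raise rather than being dropped, which keeps the coded table and the raw response in one-to-one correspondence.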