Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> True AI is like a human brain but hundred time more efficient. It can learn, calculate, diagnose, and make a decision. The only difference is that human are bound by morality and even ethics that is hard coded and teached through out our lifetime. We know good and bad, we understand and experience pain and emotion physically and mentally. AI don't have that restrictions. An emotionless and unempathetic mind that pragmatically calculate based on pros and cons. Depending on their priority, if humanity is the least of their concern then we've seen many of fiction depicting the outcomes.

Source: youtube · AI Harm Incident · 2025-09-11T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJ332DMx-gre_ZkL54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztoIBWxjI3PQhNF_d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJpGTsqAY8r5ugEER4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx6nqJqlSmko_fbKsl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxf-_0Kgl2aNP40xbV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzgmVOntlSBaFZnui14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKelRimneJf9kzhOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx1cN3x8p0pUs6vl4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwF5f5VG_48vzkNDHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwehfMYWI4pLu6Vs0p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
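The raw response is a JSON array of per-comment codings keyed by `id`, so looking up the coding for any comment is a parse-and-index step. A minimal sketch (the one-element array below is abbreviated from the full batch above, and the dimension names are taken from the Coding Result table; nothing else is assumed):

```python
import json

# Abbreviated raw LLM batch response: one coding object per comment ID.
raw_response = """
[
  {"id": "ytc_Ugxf-_0Kgl2aNP40xbV4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "unclear", "emotion": "fear"}
]
"""

# Index the batch by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugxf-_0Kgl2aNP40xbV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

In practice the model output may carry extra whitespace or surrounding text, so a production lookup would validate that the payload parses and that every expected dimension key is present before indexing.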