Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
also, ai have buit-in incentives to generate more code,
and generate long respon…
ytc_UgwSAZPbt…
Artists should need to opt in and should be compensated for their work being use…
ytc_Ugwl0MDTO…
Does Peter Theil consider Nature the Anti Christ, hence he and the broligarchs n…
ytc_UgxovwGeh…
You do not even know what learning means. A. I. is brainless, are you competing …
ytc_UgyCMOSHD…
A robot may not injure a human or, through inaction, allow a human to come to ha…
ytc_UgwZu4CZr…
AI in drones actually seems like a good idea, especially if it’s used as a safet…
rdc_k8x39sy
But that's the thing, Dr. Tyson. History tends to gloss over a lot of things. Th…
ytc_UgwVJPKbI…
NEAR END: ASK THE AI FOR IT'S CONSENT??? WTF? Are you high. He sounded intellige…
ytc_UgzylkhZE…
Comment
When I first heard that Tesla was going to produce self-driving car’s I thought it was the stupidest idea anybody could possibly come up with.💡 🤔💭
“Think about it!” Only a human being is capable of abstract thought, and it is impossible to program abstract thought into a car.🚗 🤔💭? But, Elon Musk, doesn’t believe in God! Therefore, he doesn’t believe in the limitations of man’s abilities compared to God’s. So, he immediately dismissed the possibility that he could be wrong! When we were discussing the introduction of self-driving technology back in October 2020, I said, there are things that you cannot possibly program into a car, I don’t care how smart you are. There are things that only a human being can understand about the world around them, where a mechanical device cannot. For example, if you see rocks rolling down a mountain side up ahead, and this tells you to stop before the rocks block the road or land on top of your car, self driving technology will simply keep going until you get killed. 🛣️ 🪨💥🚗 This is just one of many inevitably scenarios that are obvious to anyone with common sense.🤨👉🏻😮💨
youtube
AI Harm Incident
2025-10-19T11:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy5GFGbfzJQ9J-8ELx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwq5L9nsDDVgfwEJUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTDqTgpm6nAy5j48l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyOX8e2U2MqXi_78qp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfbsU92b4NLkLDwht4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzdeY5xZVajj-36qON4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyD0lHs-eiMvhSXK0V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw4Zo4glb5lVt3__Td4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwx2ckwyP8oU-lPF9N4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw174yCD5kPHbpRlEp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
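The raw LLM response above is a JSON array of per-comment codings, and the page supports look-up by comment ID. A minimal sketch of that look-up in Python, using two of the full IDs shown above (the function name `index_by_comment_id` is an assumption for illustration, not part of the actual tool):

```python
import json

# Excerpt of the raw LLM response shown above (two rows, IDs copied verbatim).
raw_response = """
[
  {"id":"ytc_UgwfbsU92b4NLkLDwht4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTDqTgpm6nAy5j48l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows) and index rows by comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgwfbsU92b4NLkLDwht4AaABAg"]["responsibility"])  # developer
```

Note the first row matches the Coding Result table above (responsibility: developer, reasoning: mixed, policy: unclear, emotion: mixed); the dictionary index is what makes "look up by comment ID" a constant-time operation.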