Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Anyone remember the movie I Robot with driverless trucks and self driving cars? …
ytc_UgxIe6VO4…
A bit like Churchill. He was the perfect wartime leader but as soon as the war w…
rdc_jxz0n49
AI is a marketing scam and I wish people would stop believing the hype. AI is so…
ytc_Ugx5q6stE…
The concerns of powerful technoligy being used by a small elite club to enforce …
ytc_UgzvutYEh…
I enjoyed the video and then realised it has become extremely popular to misrep…
ytr_UgxoMhm4G…
I think I caught Dave lying about the responses from ChatGPT, in min 24:00 an AI…
ytc_UgzHy5KNg…
rokurokubi is a real japanese yokai, the ai itself is garbage but the concept is…
ytc_UgwXJ7mSh…
2 year ago, ai can't even write code. Now they are edgy but in future like 2-3 y…
ytr_UgxMxr8S7…
Comment
"ammoral" Lol. I love when this term gets thrown around for things that have a different perspective. They have a moral perspective. Theirs just actually makes sense. Whereas ours struggles to solve simple problems like the ethical trolley problem...
.
The biggest "problem" with AI has always been that it reaches the conclusion humans are the problem. And rather than accepting that as the obvious truth anyone with 2 working brain cells could figure out. We REJECT this conclusion adamantly.
.
It's very symbolic to how many people in society act. They don't care about the facts or the truth. They don't care about the consequences (unless those consequences DIRECTLY affect them RIGHT NOW). The morality these people form is often centered around that lack of insight/knowledge. And they reject anyone that challenges it. These people are more dangerous than AI will ever be.
youtube
AI Harm Incident
2025-09-12T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
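A raw response like the one above can be checked before it is accepted into the coded dataset. Below is a minimal validation sketch; the allowed code values are inferred only from the labels visible on this page (the full codebook may define more categories), and the `validate_batch` helper is hypothetical, not part of any existing tool.

```python
import json

# Allowed codes per dimension, inferred from the values observed above.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems (empty if clean)."""
    problems = []
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"unparseable JSON: {exc}"]
    for i, row in enumerate(rows):
        if "id" not in row:
            problems.append(f"row {i}: missing id")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                problems.append(f"row {i} ({row.get('id', '?')}): bad {dim}={row.get(dim)!r}")
    return problems

# Example: one well-formed row passes, an unknown code is flagged.
good = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
bad = '[{"id":"ytc_y","responsibility":"alien","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
print(validate_batch(good))  # → []
print(validate_batch(bad))   # flags the unknown responsibility code
```

Rows that fail validation can then be queued for re-coding rather than silently merged into the results table.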