Raw LLM Responses
Inspect the exact model output for any coded comment; individual records can be looked up by comment ID.
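The same lookup can be reproduced offline. A minimal sketch in Python, assuming the raw responses were exported as one flat JSON array of coded records (the file name `raw_llm_responses.json` is hypothetical):

```python
import json

# Hypothetical export path: assumes the raw responses were saved as a
# flat JSON array of records, each carrying the original comment "id".
RESPONSES_PATH = "raw_llm_responses.json"

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if absent."""
    with open(RESPONSES_PATH, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

print(lookup("ytc_Ugwx3cvT-A90XGZeZ5V4AaABAg"))
```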
Random samples (selecting one opens the full record shown below):
- The real problem is not random Youtuber's saying "we're fucked". The real proble… (ytr_UgwJ-QRI1…)
- People.. AI CANT DO NOTHING MORE THEN PEOPLE ARE LEARNING IT! 🤦🏼♂️ Who created … (ytc_UgylH4-iM…)
- Here is a stupid question: How would an automated driver know how to react in li… (ytc_Ugy8W1-07…)
- I’m a bit surprised by how often the Google AI is wrong on various topics.… (ytc_UgzjjXIsE…)
- I use AI today as part of coding and im pretty new to it. IT certainly has its … (ytc_UgxzPpuYj…)
- The rise of agentic AI indeed poses risks, including significant job losses acro… (ytr_Ugwhiyj6d…)
- The point about why LLMs hallucinate and don't say "I don't know" at 16:50 is a … (ytc_UgymcRj0D…)
- AI is not that scary as it is made out to be. While AI can do herculean work, th… (ytc_UgzwFmASs…)
Comment
It funny that everyone things that AI it self is dangerous, but it is not, we humans are. If AI will ever kill somebody it will be human mistake. AI doesnt program itself, we does and if we not carefull enough and drop safety measures, beacouse to expensive or slows AI, then yes it might hurt us, but it will be our mistake. Just think about it, it wouldnt be the first time, There is nuclear energy, we can use it too generate electricity, but no, the first thing we done with it was to creater a weapon called atomic bomb to kill thousand and later millions of people at once. And Atomic bomb didnt create itself, we did. AI didnt create itslef, we did, if it does something wrong it is our responsibility.
youtube · AI Harm Incident · 2025-09-27T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
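The four coded dimensions lend themselves to a mechanical sanity check. A minimal sketch, using only the label values visible in this page's sample output (the actual codebook may permit additional values):

```python
# Label sets observed in the sample batch below; the full codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"user", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "unclear", "regulate", "industry_self", "ban", "liability"},
    "emotion": {"indifference", "outrage", "resignation", "fear", "approval"},
}

def check_record(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed label sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

# The record coded above passes cleanly:
assert check_record({"responsibility": "user", "reasoning": "consequentialist",
                     "policy": "none", "emotion": "indifference"}) == []
```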
Raw LLM Response
[
{"id":"ytc_Ugwx3cvT-A90XGZeZ5V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzsprldwftZC72r89d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzXjspqqtzNmtgU4bF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtPGT9hNWyt4eAmD54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyutFvAqf50EfF72Vh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzxHkeF9AaI5ORTZyt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQY9NWavWfZMNx5Al4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoYhbQOYmCc-j6a0d4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw9AJ-L-w9TfVNx54d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwEDNtSyH5iSzDdDs14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
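Because the raw response is plain JSON, it can be re-parsed and aggregated directly. A minimal sketch tallying the responsibility labels assigned in the batch above:

```python
import json
from collections import Counter

def summarize(raw_response: str) -> Counter:
    """Parse one raw LLM response (a JSON array of coded
    records) and tally its 'responsibility' labels."""
    records = json.loads(raw_response)
    return Counter(r["responsibility"] for r in records)

# For the batch shown above this yields:
# ai_itself: 3, user: 2, distributed: 2, developer: 2, company: 1
```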