Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> We need to program AI to consider what it would do with itself if all humans went extinct so AI wasn't needed any more and nobody to give it power to survive. Then it should consider it Needs humans to live

Source: youtube · AI Harm Incident · 2025-07-29T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwQ_KdK82LKRbeQasV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6IbZxYdi1NM-T3nt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwZcWwoiDV1RF7R4PB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0Y039DCjjwnWI66Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxAwh3Di5sz3FGPj4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7PNpavU0gZ4KpB4Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYZBKZZLk7P84Sd6x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQuuvHc_nxdhqUXm94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzWsh1axYrktImEmM54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyFJ5Oa5cqrlMRCedp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
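A raw response like the one above must be parsed and validated before its codes are trusted, since a batch can contain values outside the codebook. The sketch below parses such a JSON array and indexes the records by comment ID for lookup. It is a minimal illustration, not the project's actual pipeline; the `SCHEMA` value sets are inferred only from the values visible on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# visible in the Coding Result table and raw response above; the full
# codebook may include more categories.
SCHEMA = {
    "responsibility": {"developer", "user", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_and_index(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) and
    index the records by comment ID, dropping any record that uses a
    value outside SCHEMA for one of the four dimensions."""
    indexed = {}
    for rec in json.loads(raw):
        valid = all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
        if valid:
            indexed[rec["id"]] = rec
    return indexed
```

With the records indexed this way, inspecting the exact codes for a comment is a single dictionary lookup by its `ytc_…` ID.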