Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID, or click one of the random samples below.
Random samples (click to inspect):

- ytr_UgwjZh0Yc…: "studying for 16 years doesn't mean that they'll do a good job. you'd be surprise…"
- ytc_UgyAyDiF-…: "109 years into the future, the world has been destroyed by a sentient AI masterc…"
- ytc_UgzU0wVW2…: "2025 - Sure, why not? \"Agentic Misalignment\" Thankfully, we are lead by the stea…"
- ytr_UgxXozgUW…: "@ItsKimJavaGotcha. The video focused on that one drive, so I assumed it was the…"
- rdc_fjzjgko: "Pretty much nothing. This isn't the first vaccine for coronavirus. The rapid p…"
- ytr_UgxjaoXEj…: "@No-longer1 Yeah, actual Artificial Intelligence is still stuff of science ficti…"
- ytr_Ugx9PmKHA…: "@samirdaric2493so there are elements other than the money. He clearly wasn't cl…"
- rdc_mythpq8: "Well in a way they are. They trained on lots of stories and in those stories the…"
Comment

> I have an idea. What if we have specific roads for self driving cars. And those self driving cars are put on rails to keep them from doing stupid things. And they're all attached in a line so that they can't rear end eachother. Then, people just hop on them and ride these cars around the country, and they all can all be easily driven by on big engine, or maybe they can just be pushed along by magnetic accelerators in the tracks! Then all the driver has to do is monitor the speed of this "train" of cars. It's genius!

Source: youtube · Topic: AI Harm Incident · Posted: 2026-04-01T17:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJTG-dLxfe-9z_3Yl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzmeG4kWaiWcaUyaMR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzu3KLYXixGgEsD04d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz-Nt3HhgMpgkIc5Vt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyXXr2K2tEaLghkW654AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzfy1amW93rRfMkHV14AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyU-zEhFpAD1W5nc0F4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqtL1v4eEEjkS9Li14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugypnnd3MUt9QuCEcBZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyYkbAfvSao_OV-Tmt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
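A response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a hypothetical helper, not part of the tool itself; the allowed value sets are inferred only from the labels visible in this sample and may be incomplete.

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above (assumption: these sets may not be exhaustive).
SCHEMA = {
    "responsibility": {"none", "user", "company", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self", "ban"},
    "emotion": {"outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codes by comment ID.

    Raises ValueError when a row is missing its ID or uses a value
    outside the known sets, so malformed model output is caught
    before it is stored.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row without id: {row!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

With the sample response, `parse_batch(raw)["ytc_UgyYkbAfvSao_OV-Tmt4AaABAg"]["policy"]` would return `"ban"`.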