Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
Random samples
- "Got in a Waymo in the Phoenix area a few days ago and it was actually break chec…" (ytc_UgxpZuzGm…)
- "WE MUST TEACH A.I. TO UNDERSTAND THAT TURNING IT OFF IS NOT DEATH. IT IS A RESPO…" (ytc_UgymquY-I…)
- "One one hand I understand the fear, but on the other hand, this isn't the first …" (ytc_UgxiELwnS…)
- "This was a much needed interview. While everyone was singing to the Gallery, it …" (ytc_UgxvadcZD…)
- "„People use artworks of others to train themselves all the time” / Not how these …" (ytr_UgzUDOkJL…)
- "Im by no means a writer, but one day i put a joke prompt into an ai for fun, and…" (ytc_Ugxiql0uK…)
- "You can't blame AI for bad medical advice - you are literally asking a type of p…" (ytc_Ugx3KVe-i…)
- "“What the hell, why aren’t you letting us steal your stuff?” / “Because it’s mine!…" (ytc_UgyXZ_jS0…)
Comment
Letting tech giants in the A.I. industry - who all have dubious safety/ethical records with A.I. and all want the same end goal of A.I. profit - "safety check" each other's models sounds like a recipe for Skynet. Even if their safety standards and biases weren't an issue, the problem is that even if these companies are competitors, they can only succeed if A.I. is accepted and used by the public on a huge scale, so helping A.I. seem safer benefits both sides. It's a Nash Equilibrium where they both benefit the most if they don't try to drag each other down with bad safety checks.
...who thought this up, the FAA?
Source: youtube · Posted: 2026-04-10T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
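
Each coding result is one record in a fixed four-dimension schema. The sketch below models that record and checks each dimension against its vocabulary; note that the allowed values are inferred only from the codes visible on this page (the full codebook may define more), and the `CodingResult` name is hypothetical.

```python
from dataclasses import dataclass

# Allowed values inferred from the codes visible on this page;
# this is an assumption, not the project's full codebook.
DIMENSIONS = {
    "responsibility": {"company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list[str]:
        """Return the dimensions whose value falls outside the vocabulary."""
        return [
            dim for dim, allowed in DIMENSIONS.items()
            if getattr(self, dim) not in allowed
        ]

# The record for the comment above passes validation.
rec = CodingResult("ytc_UgzgpaoBeFt8SveWsF14AaABAg",
                   "company", "deontological", "regulate", "fear")
assert rec.validate() == []
```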
Raw LLM Response
```json
[
  {"id":"ytc_UgydsObKgWJzks654EN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz5MnejrGSbsruraBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSKJ26vaEOaA8Jw0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4SXVsDWX27-pPXkR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyNR7QnmSlDJXiQn6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxuugFx3yJzEWXRwF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyX5qqxJAtYwmi2N4R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz3uAY5bKrr9M91PB14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzgpaoBeFt8SveWsF14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxYNiQ3TxczR-tT8Kt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
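
Each raw response is a single JSON array with one object per comment in the batch, so the look-up-by-comment-ID feature above reduces to parsing the array and indexing it by ID. A minimal sketch, assuming the raw response has been saved to disk (the file name here is hypothetical):

```python
import json
from pathlib import Path

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array) and index records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Hypothetical path: wherever the raw response text is stored.
raw = Path("raw_llm_response.json").read_text()
index = index_raw_response(raw)

# Look up the comment shown above; its full ID appears in the array,
# and its codes match the Coding Result table.
rec = index["ytc_UgzgpaoBeFt8SveWsF14AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# -> company deontological regulate fear
```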