Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_Ugybsp8t8…: "AI is superhumanly well read, not very smart. I heard it is on about the same le…"
- ytc_UgzN8Yw8L…: "Its sad that deepfakes are used this way. I remember when deepfakes were made of…"
- ytc_Ugx-w5Vg-…: "So how do you make people ready for totalcontrol implemented by humans in A! and…"
- ytc_Ugw0xAmH4…: "Don't ever trust what the AI says. The AI is designed to agree with you, no matt…"
- ytc_Ugz5vetP4…: "Imma say this: as an artist I also get genuinely angry and furious when the topi…"
- ytc_Ugwmg8i1d…: "I feel like a lot of these comments are AI. His closing statement should be the …"
- ytc_Ugyc483l0…: "I feel like a big problem with the AI good/bad argument is that lots of people l…"
- ytc_UgwVj8b0f…: "The complete lack of humanity exhibited by these tech people is unfathomable. Be…"
Comment
Self driving cars are not a reliable thing in the current road network. USA regulations are too weak! How can they let Tesla release their cars as "self driving" if they even have no radar system! Here in Europe regulations are much stricter and they can't sell the "self driving" version... furthermore... many people LIKE driving!
Last thought... if they really want to sell this stuff they should at least provide a kind of "black box" recording everything... because you can't interview a robot at the police department 😅
youtube
2026-02-11T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwmTHJDi5WJpNB2UeJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwyd1wgLmE-qTXwPhh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwJZOb7MGXy5KKiM7Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyfZn_D3J8RjoaYODR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyKjVUujYTBjkvwWNN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVd5cI3LaJyKG2VZt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgybBUCLCxod2EGroRh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySZI0PB5THkLrwqz14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyGYhTt82XsykLT9V14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWQyTfDDm1WGJODcx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]
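
The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch might be parsed and validated before display, assuming unknown or missing values fall back to "unclear" (the allowed value sets below are inferred from the values visible on this page, not from a published codebook):

```python
import json

# Allowed values per dimension, inferred from the examples on this page.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation",
                "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw model response into {comment_id: codes}, coercing
    unknown or missing dimension values to 'unclear' instead of failing."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # skip malformed entries that carry no comment ID
        coded[cid] = {
            dim: rec[dim] if rec.get(dim) in allowed else "unclear"
            for dim, allowed in ALLOWED.items()
        }
    return coded

# Hypothetical IDs and values, for illustration only.
raw = ('[{"id":"ytc_abc","responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"outrage"},'
       '{"id":"ytc_def","responsibility":"martian"}]')
batch = parse_batch(raw)
print(batch["ytc_abc"]["policy"])          # regulate
print(batch["ytc_def"]["responsibility"])  # unclear (unknown value coerced)
```

Coercing to "unclear" rather than raising keeps one bad record from dropping the whole batch, which would explain why a lookup can still render a full (if uninformative) coding table like the one above.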