Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
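Below is a minimal sketch of what such a lookup could look like, assuming the coded results are stored as JSON Lines with one record per comment. The file name and field names (`coded_comments.jsonl`, `raw_llm_response`) are illustrative, not the tool's actual storage layout.

```python
import json

def load_index(path="coded_comments.jsonl"):
    """Build an in-memory index from comment ID to its coded record.

    Assumes one JSON object per line, each with at least an "id" field.
    """
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

def lookup(index, comment_id):
    """Return the coded record (including the raw model output) for one ID."""
    return index.get(comment_id)

if __name__ == "__main__":
    index = load_index()
    record = lookup(index, "ytc_UgwmTHJDi5WJpNB2UeJ4AaABAg")
    if record is not None:
        print(record.get("raw_llm_response"))
```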
Random samples

- Why don't we use AI to solve the energy crisis instead of build more models and … (ytc_UgwBXerzY…)
- Bro, the deep ai i used is just crap, i tried to use a question but pops a PRO p… (ytc_UgwqJYsTZ…)
- It look more like an old nanny kangaroo than a functional robot... One kick fro… (ytc_UgyrC1Nys…)
- To think that AI has the potential of developing self awareness and emotions is … (ytc_UgwbVUQh9…)
- Why do we need robots that look like people? We already have real people, at le… (translated from Spanish) (ytc_Ugz1zoPmH…)
- I'm a chatgpt noob but upgraded to plus, then I had it make me a weekly checklis… (ytc_Ugz3Eiz8L…)
- Latter one, but is is not because it is "smart" enough, it is not smart at all s… (rdc_koq2h6s)
- So, hear me out, the argument against AI is kinda like the argument against perf… (ytc_UgxWVoCZs…)
Comment

> i feel like the pros dont actually ever outweight the cons because then you have to say- why would i want a human teaching my child when humans can be wrong. My Chef or Server can get a measurement wrong in the recipe. My Doctor can give the wrong medication. Then we're left with the solution that if a human does something wrong then we should just remove the human element. If were going to say reducing the death toll caused by human error by replacing the human element with robots then why dont we just say humans are dangerous and shouldnt be allowed to do anything instead of creating rigid structures to reduce harm. Its not even like removing the human element is EASIER as evidenced by the dissection in this video. Hire more female drivers dont remove the concept of drivers entirely in the vague hope that eventually a robot will never run someone over again

Source: youtube
Posted: 2026-02-11T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
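For reference, here is a hedged sketch of the record behind this table as a Python dataclass. The dimension names come from the table; the label values listed in the comments are those observed in the raw response below, and the full label set may well be larger. The `unclear` defaults are an assumption about how the pipeline fills dimensions it could not recover from the model output.

```python
from dataclasses import dataclass

@dataclass
class Coding:
    # Observed labels per dimension (the full set may be larger):
    #   responsibility: government, company, ai_itself, none, unclear
    #   reasoning:      consequentialist, deontological, virtue, mixed, unclear
    #   policy:         regulate, liability, ban, none, unclear
    #   emotion:        outrage, approval, resignation, fear, indifference, mixed, unclear
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"
```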
Raw LLM Response
[{"id":"ytc_UgwmTHJDi5WJpNB2UeJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwyd1wgLmE-qTXwPhh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwJZOb7MGXy5KKiM7Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyfZn_D3J8RjoaYODR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyKjVUujYTBjkvwWNN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVd5cI3LaJyKG2VZt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgybBUCLCxod2EGroRh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySZI0PB5THkLrwqz14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyGYhTt82XsykLT9V14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWQyTfDDm1WGJODcx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"})