Raw LLM Responses

Inspect the exact model output for any coded comment by looking it up by its comment ID.
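The same lookup is easy to reproduce offline. A minimal sketch, assuming the coded records were exported to a JSON Lines file named coded_comments.jsonl with one record per comment (the file name and storage format are assumptions, not something the page specifies):

```python
import json

def load_index(path: str = "coded_comments.jsonl") -> dict[str, dict]:
    """Build an in-memory index from comment ID to its coded record.

    Assumes one JSON object per line, each carrying an "id" field,
    as in the raw LLM response shown further down this page.
    """
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                record = json.loads(line)
                index[record["id"]] = record
    return index

# Usage: fetch the coding for one comment by its ID (this ID appears in
# the raw response shown below).
index = load_index()
print(index["ytc_UgyACA_tP1esiLUe6mx4AaABAg"])
```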
Random samples (click any to inspect):
- ytc_UgwRe9hwf…: I honestly think that the programmers of AI art were stupid. They didn't even re…
- ytr_Ugx_Su9SL…: You are aware that their datasets are made up of a combination of real and drawn…
- ytc_UgwVabnuT…: Humans / Unitedly / Must / Act / Now to prevent the AI apocalypse & save the / United S…
- ytc_UgwhTF1OF…: Trying to corner an LLM into a forced loop is like taking place in a race for in…
- ytc_UgzsMz_iH…: Interesting times we live in that somebody who says 'I dont know enough about AI…
- ytc_UgxepStFP…: Hi tech were so desperate to create and fund AI ,now it come back to back fir…
- ytc_UgyG4uZU2…: He thinks that a program has beliefs? / It is programmed for a function. / That is i…
- ytc_UgwYPGovV…: the reason the disabled arguement is hilarious to me becuase "A GUY WITH NO BODY…
Comment
The problem is that no amount of safety will ever be good enough for people but only when they cannot be held accountable. FSD is already shown to be nine times safer than a human driver in accidents per miles driven. When that number becomes 100x safer it still will not be enough. Humans can kill 50,000 people driving manually and nobody bats an eye, if Tesla reduced that from 30k to only one person, they would be sued into Oblivion and YouTubers looking for clicks like this person would be saying, "well you could use lidar, you could use radar, you could use CO2 detectors, you could use infrared, you could use you could use you could use you could use." Nothing will be good enough which is why we will not have self-driving cars until they have some sort of government protection stemmed from everyday Americans wanting to see 30,000 lives a year saved understanding that there still will be some life lost because NO system will ever be perfect.
Source: youtube, 2026-04-09T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
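In code, a coded record can be modeled as a small typed structure. This is a sketch only: the field names mirror the JSON records in the raw response below, and the per-field labels listed in the comments are just the values visible on this page, not necessarily the full codebook:

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded comment, mirroring the records in the raw LLM response."""
    id: str              # e.g. "ytc_UgyACA_tP1esiLUe6mx4AaABAg"
    responsibility: str  # seen here: none, company, distributed, unclear
    reasoning: str       # seen here: consequentialist, deontological, unclear
    policy: str          # seen here: none, regulate, liability, ban, unclear
    emotion: str         # seen here: approval, fear, indifference, outrage
```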
Raw LLM Response
[{"id":"ytc_Ugy9tdBbB9hrFjKYKmV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlKLZNeTJmLUULum54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyoEaRVuI0jq5zye7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2ZUtG539KTnT_nrN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyACA_tP1esiLUe6mx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw28IIUnhqMgH8l6Jd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw5piqVZRMOW_40pWl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy_6aFmUbFEvAQ33up4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyACopbj6pnHeKvo5l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzlEYEuPjH6G9ATpcl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}]