Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Much worse.
Tiktok might give their data to China... Who could then... Try to se…
ytr_Ugw5eg09E…
That would be the case for sure if God didn't intervene, but fortunately God let…
ytr_UgywFGOAx…
If AI ever figures out how to generate realistic images of human hands, we're in…
rdc_lq725gm
🎵 🎺 🪕 🎺 🥁 🎶
Hahaha, what a savage, Trump is just fantastic! He made AI Schumer …
ytc_Ugw6X12Xg…
smart AI is good for humanity. if you think, how to build rocket to reach anothe…
ytc_Ugze7zHWz…
ChatGPT is very biased and it pushes the woke agenda. It stops if you call it ou…
ytc_UgxwgMIiE…
Growing up, it was sort of distilled into us that automation would make our live…
ytc_Ugysl1Cb5…
@coltenh581 You’re saying there isn’t a limited number of moves you can make on …
ytr_UgxouTeg2…
Comment
There are public records available that make it easy to assess. Tesla is 13 times less safe than a human driver. Not even close to equally safe, and miles away from safer. Autonomous driving is only interesting when it works. All current data shows it doesn't. That is the point we are at. Because when it does not work, it is a disaster. We are currently facing a disaster and are expected to just accept it.
That should be the narrative, not some sort of future vision. Not some marginally possible utopia, which all data indicates is impossible with current technology. We should judge the tech solely on its current state and near-future developments, and all of that points to DISASTER.
Currently we are promised working tech 2 years, 5 years from now, but the current tech is a menace sharing the roads with you. This is no way to introduce or develop tech. You don't go ahead and just factor in the inevitable death of dozens, maybe hundreds, just for a slim chance of making this burning pile of dogshit tech stack work at some point.
youtube
2026-02-18T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxCTczrF5_qG1UUpDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYPogieASQSzrl2ph4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugygc3uEdJ-Q5Y5dhXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtyonzmXa5B4InHBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzD7E8oNfnyM88rAj14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzj_C2Xy6ktxcCW6AV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJMzlSrGOZBk96mfN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSv52XjbclNkHhgyl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3mrf9OIXdGjryaZ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTjAGRcXrUJB3m_6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
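Responses in this format can be checked before the codes are stored: the model returns a JSON array where each record carries an `id` plus the four coding dimensions shown in the table above. Below is a minimal validation sketch; the `validate_coding` helper is hypothetical, and the allowed value sets are assumed from the values visible in this sample (a real codebook may define more categories).

```python
import json

# Allowed values per dimension, taken from the sample output above.
# Extend these sets to match the full codebook (an assumption here).
SCHEMA = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it has an "id" and every coding dimension
    holds a value permitted by SCHEMA; everything else is dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps a long batch run alive when the model occasionally emits an out-of-vocabulary label; the rejected IDs can then be re-queued for recoding.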