Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Y'all think your fighting AI from taking your jobs... Meanwhile, they are litera…" (`ytc_Ugw369NLv…`)
- "Ai will know truth from lies and is needed for our growth for our suture develop…" (`ytc_Ugye8mWQa…`)
- "One way to combat the negative side of AI is to simply ensure the datalabels are…" (`ytc_UgyPNqVKx…`)
- "Indict the developers... AI didn't fall out of a tree. Of course all of this is …" (`ytr_UgzdT-ADj…`)
- "it's already here a future where a lot of people have no job no purpose… Artific…" (`ytc_UgyVhp3D8…`)
- "Because people don’t trust doctors. We see it everyday. Just saw an article who …" (`ytr_Ugw6iaBKQ…`)
- "In the best case, your AI will be only as good as people programming/training it…" (`rdc_n9vi97w`)
- "Ai is from Hell and Satan PUPPETS are PUSHING it with all their lies to DUMB hum…" (`ytc_UgyW-vQuf…`)
Comment
Fellow yinzer. I worked at ATG and currently at Aurora. It
Great video. Even from the jump, accountability *needs* to exist. I do genuinely believe across the whole SDV is safer, but that still sidesteps what happens when something will happen. Safety is irrelevant if the rare case is a legal/accountability disaster. If nothing else, I have been happy that Aurora has been pushing for conversations between the government and regulations and the like.
And while bias is obvious on my part, I genuinely think Tesla and Elon are going to undermine any progress and public faith in autonomous driving.
Source: youtube · Posted: 2026-02-04T14:5… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxqTJ4MOsAD-NjMh894AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxDD-11UQLsz3D1FYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwBSWSp4Yowpg73sZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyMYsPn52tjL6-qA3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwkPsRWztoZ8SilAW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx0-AYFKcv5Wt5hQix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzMBzUUj3zF1VM-_754AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxVmrG5BsjTcWxjYSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyhBBPUahHARfZTqt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwSuDb4olpPxrt_7PJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
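The raw response is a JSON array of records, one per comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of the lookup-by-comment-ID step might parse that payload and index it by `id`, rejecting records that drop a dimension. The function name `index_by_id` and the validation logic are illustrative assumptions, not the tool's actual implementation; the two sample records are taken from the response above.

```python
import json

# Two records copied from the raw LLM response shown above (abridged for brevity).
raw = """
[
  {"id":"ytc_UgxqTJ4MOsAD-NjMh894AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxDD-11UQLsz3D1FYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
"""

# Every coded record must carry the comment ID and all four dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID,
    raising on any record that is missing a coding dimension."""
    by_id = {}
    for rec in json.loads(payload):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        by_id[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return by_id

coded = index_by_id(raw)
print(coded["ytc_UgxqTJ4MOsAD-NjMh894AaABAg"]["policy"])  # -> regulate
```

With the index in hand, inspecting any coded comment is a dictionary lookup by its `ytc_…`/`ytr_…`/`rdc_…` ID, which is how the "Look up by comment ID" view above can be backed.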