Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Most ai safety experts beleive it's most likely that and ai model will kill all …" (ytr_Ugx5YCRGC…)
- "The 9/10 scaffolding part tracks, AICarma’s AI monitoring nails summaries too, b…" (ytc_UgxSnFggI…)
- "I’m so relieved you are not using AI. You could share this video with your frie…" (ytr_UgyJZCjEp…)
- "Re generated images, it all sounds too easy to flag them as such, but any modern…" (ytc_Ugxf_TXzr…)
- "@fishpreferred3322 Your reply is a philosophical rebuttal of the notion that AI …" (ytr_UgwPRo2Ks…)
- "3:14 it’s mostly recognition of marginalized peoples. Seems like AI has more emp…" (ytc_Ugy6i5C9o…)
- "Oh great, we'll get those incessant menus that lead nowhere but back to a human.…" (ytc_UgxQc0f_f…)
- "It not about only learning about AI and is more about apply AI to help us develo…" (ytc_UgxILt308…)
Comment
Money money money... Tesla wants to make the most money possible. OK, let's grant that - if Tesla releases self-driving software that crashes a bunch, everyone stops buying it and Tesla gets inundated with lawsuits and the company either goes bankrupt or limps along, a shell of its former self. So if Tesla wants to make the most money, by FAR, what should it do? Release a self-driving car that is safer than humans. The argument they're trying to save a few bucks in hardware to try to make more money is preposterously stupid. Working, safe, reliable self-driving is the difference between being the most profitable car company ever and risking bankruptcy.
Platform: youtube
Topic: AI Harm Incident
Posted: 2022-09-04T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
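The table above is one row of the batch output shown under Raw LLM Response below. A minimal sketch of how such a row could be represented and sanity-checked in Python; the category sets are inferred only from the labels visible in this sample and are assumptions, not the full codebook:

```python
from dataclasses import dataclass

# Label sets inferred from the values visible on this page; the actual
# codebook may define more categories (assumption, not the full schema).
RESPONSIBILITY = {"company", "user", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "liability", "ban", "none"}
EMOTION = {"fear", "approval", "outrage", "indifference", "mixed", "resignation"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if the model emitted a label outside the expected sets.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected {name} label {value!r}")
```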
Raw LLM Response
[
{"id":"ytc_UgxCxYZo9m_LHfHEkp14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKUOLxpSSl60xSwC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyD6nQYTbMDdVNZHn94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy3SrPcKbufoo9yxx14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJBR2a-RPrCK4RpP14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyt-Fa7mbWvHZ2gW5R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlZO5_AoMLXPOG5Qp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgysN3F3pzx6MrDVMVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxu19QGgTea-JCpAHB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx51pLGEND4ettC6SR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
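To support the look-up-by-ID view at the top of this page, the raw batch response can be parsed and indexed. A minimal sketch, assuming the array above is saved to a file named raw_response.json (a hypothetical filename; the actual storage path is not shown here):

```python
import json

# Hypothetical filename; load the JSON array exactly as printed above.
with open("raw_response.json", encoding="utf-8") as f:
    rows = json.load(f)  # list of dicts, one per coded comment

# Index by comment ID so any single coded comment can be looked up directly.
by_id = {row["id"]: row for row in rows}

record = by_id["ytc_UgxCxYZo9m_LHfHEkp14AaABAg"]
print(record["responsibility"], record["emotion"])  # company fear
```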