Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (a minimal lookup sketch follows the sample list below).
Random samples (click to inspect):
- Always love your content but the AI chats are VERY contextual via the prompts, u… (ytc_UgzFRd9eZ…)
- "lets automate everything except for the higher ups making all the dough" "oh sh… (ytc_Ugw69yiVg…)
- I don’t think this gets to the biggest issue which is lack of continual learning… (ytc_UgxLFOMJL…)
- After Sophia said I love you all goodbye what did the mail robot say ignorance? … (ytc_UgylrXchl…)
- I can't wait for AI to scroll social media for me so I can go and do more produc… (ytc_Ugy0osgr5…)
- That second one is exactly why automated emergency takeovers are just so much be… (ytc_UgwEOCbDk…)
- All the AI BS is simply a new F-ing toy, People will get board with it as all ch… (ytc_UgxewxEIM…)
- Seems like it makes marketing (among other things) more efficient and thus surve… (ytc_UgxMZdnP4…)
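The lookup works off the comment ID (the `ytc_…` strings above; they are truncated here). A minimal sketch of how such a lookup could be implemented over a JSON Lines export of coded comments; the file name `coded_comments.jsonl` and the field names are assumptions for illustration, not the page's actual backend.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent.

    Assumes a JSON Lines export where each line holds one coded comment
    with at least an "id" field; the file name and schema are hypothetical.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch one of the sampled comments by its full (untruncated) ID.
# record = lookup_comment("ytc_UgzFRd9eZ...")
```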
Comment
Agreed. I'm curious how the airline industry got to where it is today. I have no idea. Maybe it could act as a roadmap? I can personally see a future where everyone is safer because all of these autonomous systems are interacting with each other and layers and layers of redundancy. At what point did we trust them with hundreds of lives tens of thousands of times per day? I suspect when a) their training and field testing was much further along than Tesla (who would more than likely rush to market for money) and B) when those three systems were fully trained up like you say. Autonomous systems are going to be under scrutiny until they prove themselves safe and don't commit these really simple stupid screwups that humans in control would almost certainly not make. It seems to me that rushing them to be fully in control risks the entire future of any kind of autonomous driving. I suppose we still don't fully trust flight systems because we still have two highly trained pilots onboard. Thoughts from your experience?
youtube · AI Harm Incident · 2022-09-04T17:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
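For reference, the four coded dimensions plus the coding timestamp fit naturally in a small typed record. The sketch below is illustrative only, not the project's actual schema; the example values are taken from the table above, and the value sets in the comments are inferred from the raw response shown next.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment; field names mirror the table above."""
    comment_id: str
    responsibility: str  # e.g. "none", "company", "user", "distributed", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "industry_self", "liability", "regulate", "none", "unclear"
    emotion: str         # e.g. "approval", "fear", "outrage", "indifference", "mixed"
    coded_at: datetime

result = CodingResult(
    comment_id="ytc_...",  # truncated in the sample list above
    responsibility="none",
    reasoning="consequentialist",
    policy="industry_self",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```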
Raw LLM Response
```json
[
{"id":"ytr_Ugyvgjaq2iSqCaKCETl4AaABAg.9fZyR6xVMg39f_O7OpQGwT","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxhIOY3ZdRCMdCBt5B4AaABAg.9fZoXAbtGwL9fdz5UNAZQh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw5sX7pKTcOSijZYTV4AaABAg.9fZc3jNxmWt9fZsX7oi8DW","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxCHY3MvQMVl9-7bRp4AaABAg.9fZaH2n1qlw9fh6mTqQrCn","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytr_UgwPc8BLw_4poaCMxgR4AaABAg.9fZG-rO-kVx9ffu2OLB8YI","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzdyq7FCgl-kjhMkAJ4AaABAg.9fZ19_-cMyA9fZsS827fHr","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugx7eNenlBoUqnrEtml4AaABAg.9fYkwJ7Vvi49fZQk5147pY","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzoRtGKCUlS1SQxhr14AaABAg.9fYfTsF59BK9fZJ5-Irlr_","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxeMiGEGA4wlbEQZFR4AaABAg.9fY_lREKAJ09fa0ohft2Rj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxYVPjp3WMdRJzYO6x4AaABAg.9fYUjXVhSyf9fa3bhVc4lw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
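Because the model returns a whole batch as one JSON array, each element is worth validating before it is trusted. A minimal sketch, assuming the raw response text is available as a string; the allowed value sets are inferred from the responses above and from the table, and the real codebook may contain more categories.

```python
import json

# Allowed values inferred from the coded responses shown above; the actual
# codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"industry_self", "liability", "regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only well-formed, in-vocabulary rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows without an ID
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Usage: parse_batch(response_text) returns the clean records; anything the
# model garbled or coded outside the vocabulary is silently dropped and can
# be re-queued for coding.
```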