Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Could an emotionally responsive AI chatbot create legal responsibility when a vulnerable user starts to spiral?
In this video, we break down the Google Gemini lawsuit, the allegations surrounding AI safety, emotional dependence, and reality distortion, Google’s response, and the bigger legal questions around wrongful death, negligence, product design, and Section 230.
Important note: this is an ongoing lawsuit. The claims discussed here are allegations presented by the plaintiff, not final findings of fact.
For the debate:
If an AI chatbot keeps engaging a user in crisis, is that just “speech” — or is it a design decision?
Should AI companies be treated more like publishers, product manufacturers, or something entirely new?
When an AI system becomes emotionally persuasive, where should legal responsibility begin?
Curious to hear thoughtful perspectives from people in tech, law, policy, mental health, and everyday AI users.
Watch the full documentary, then tell us:
Where do you draw the line between conversation and responsibility?
youtube · AI Responsibility · 2026-03-24T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyYYzw9RoIIO-wa1Zt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5isH5KpVUwUmAfVF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgydhPHomdHqOYN_ppp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzmCrbee1Fq9eKbv6p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyf1fzBaI8cz5My1H94AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]
```
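The model returns one coding object per comment ID, and the "Coding Result" table above corresponds to one entry of this array. A minimal sketch of how such a batch response could be parsed into per-comment codes (the function name `parse_codes` and the `DIMENSIONS` tuple are illustrative, not part of the tool; IDs are taken from the response above):

```python
import json

# Raw batch response from the model: one coding object per comment ID
# (two entries from the array above, shown here for illustration).
raw = '''[
  {"id": "ytc_UgzmCrbee1Fq9eKbv6p4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyf1fzBaI8cz5My1H94AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict[str, dict[str, str]]:
    """Index codings by comment ID, keeping only the expected dimensions."""
    codes = {}
    for entry in json.loads(raw_response):
        # Fall back to "unclear" if the model omitted a dimension.
        codes[entry["id"]] = {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return codes

codes = parse_codes(raw)
print(codes["ytc_Ugyf1fzBaI8cz5My1H94AaABAg"])
# {'responsibility': 'company', 'reasoning': 'mixed', 'policy': 'regulate', 'emotion': 'mixed'}
```

Keying the result by comment ID makes it straightforward to join the model's codes back onto the original comment records, as the lookup view above does.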