Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Being a computer scientist who learned about these AI/ML algorithms in school, p…" (ytc_UgzscWfQR…)
- "The training data for machine-learning facial recognition is known to be based p…" (ytc_UgzZF_NjA…)
- "i also with the points that you made i also hate how there are ai books that are…" (ytc_Ugzu-9tag…)
- "Before watching: We will know AI is conscious when it does something in spite of…" (ytc_UgxIjDv5b…)
- "I'm a solo AI researcher and I can confidently tell you the problem is not with …" (ytc_UgwKbQ4n5…)
- "Nobody reads EULAs anymore.. Most artists don't know /don't care to read what th…" (ytc_UgwNJG4_L…)
- "@kidz4p509 eh, fair enough. but you've got to admit generative ai has caused a …" (ytr_Ugx6IMuCj…)
- "This means, we're facing an endless worldwide arms race, so all people need to h…" (ytc_Ugw49E3d1…)
Comment
From a legal standpoint, it seems evident to me – and I say this as a lawyer with 35 years of experience in civil liability cases – that the company OpenAI can be held liable for the damages caused by its product. Especially since the product was trained with all sorts of information (including junk data from the internet) and no prior study of potential psychological harm was conducted before its release. The risk assumed by the company was not accidental, but planned: OpenAI wanted to expose its product to test it on a huge mass of users because this could potentially be profitable. Therefore, its liability cannot be dismissed.
youtube
AI Harm Incident
2025-11-07T21:0…
♥ 70
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwccL-tEf1teXcEePZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzw7TtO_yb3Naij-o54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyanQ_xog7LXQPjmAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKb3L8zLZKrL6noe94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzzl_KxEudZxMxGvKx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyk_0cnXF8VxWHYhJN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzsHbHQEflgaf_3n214AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzkMVc5BwMMVMOI7YV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugy9cGqN9etDtKbQw2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgydjVvWvFWStr9z0BV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
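A raw response like the one above has to be parsed and checked before its codes can be stored. The sketch below is a minimal, hypothetical validator: it assumes the four coding dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`), and the allowed value sets are inferred only from the sample records above, so the real codebook may well contain more categories.

```python
import json

# Allowed values inferred from the sample records above; the actual
# codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    # "emotion" appears open-ended in the samples; only presence is checked.
}

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and sanity-check each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Every record must carry an id plus all four coding dimensions.
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing keys {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} "
                                 f"value {rec[dim]!r}")
    return records


# Usage with a single made-up record in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
records = validate_records(raw)
print(records[0]["policy"])  # regulate
```

Failing fast on an unexpected category is deliberate: if the model drifts from the codebook, it is better to surface the bad record at ingestion than to let an unknown label silently enter the coded dataset.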