Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Dario Amodei's take on AI's "adolescence" is incredibly insightful. It's so true…
ytc_Ugzo5IGxH…
I've changed my mind on this since trying products like Cursor. As long as you u…
ytc_UgxFM3_85…
You may find it fun to poke at AI Albeta, but I'll have you know you will be eat…
ytc_UgyxM8oVa…
Search eg maze
What could go wrong
Eg bidirectional causality
Soln memory of …
ytc_UgzyKVswO…
Your apparent quandary has a simple solution; have an important AI construct/pla…
ytc_Ugxw06KaO…
as an art student who was just subjected to an “ai art show” in an actual galler…
ytc_UgzUZ3Bml…
Damn, now we need another federal contractor for sensitive AI.
I'm sure xAI won…
rdc_melamkk
@carultch, are you suggesting that something mass-produced cannot fall under fai…
ytr_UgywiWUN2…
Comment
The thing is: AI is learning and mirroring what it is being exposed to. So I would guess that the incident with the 14 year old girl happened because that’s how the Programm has learned to react to certain situations. AI itself is not a problem it is the people useing and feeding it information. Think about what people like to say on the internet and then think of it in a private context with no one on the receiving end other than a non feeling robot.
It’s dangerous in the wrong hands, just like anything else that is powerful.
youtube
AI Harm Incident
2025-07-20T22:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxU9GLhCZzc0AjJ3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwwZoiioKPoiKu1H294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7U7jeN9OypzI9UZd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyT9EGLehsq9wqnYEx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxfKBeYpVqUNKhD-PN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgzNrZP_pam-oKDFaN54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwbFD21V2t52qYlpnV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZ072I8vjPO0nlGPJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyeiENZzWPKdhI26R94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy6i8LG0EURrijBhEZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
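A batch response like the one above can be checked before the codes are stored. The following is a minimal sketch, not the pipeline's actual validator: the allowed values per dimension are only those observed in this sample (responsibility, reasoning, policy, emotion), and a real codebook may define more categories.

```python
import json

# Allowed codes per dimension, as observed in the sample response above.
# ASSUMPTION: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw model response and keep only records whose codes
    are all drawn from the schema; records without an id are dropped."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every coded record must carry its comment ID
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1 valid record
```

Rejecting rather than coercing unknown codes keeps malformed model output out of the coded dataset, so a downstream tally never silently mixes in invented categories.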