Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples:

- "Several years ago I read an article saying the thing that would defeat driverles…" (ytc_UgxwKAj7A…)
- "Ironic that the HR company, as one of the sponsors of this interview, leans heav…" (ytc_Ugw90JqOu…)
- "I am a human being. And when I create something it’s unique. When Ai does anythi…" (ytc_UgwiBN_jY…)
- "I think what’s not being considered here is that for every action there is a cou…" (ytc_UgwbEZC7h…)
- "No on the digital I d and the social credit system , stop the control over peopl…" (ytc_UgxDMup-g…)
- "I'd like to believe that if AI ever hit the singularity, it would most likely se…" (ytc_UgwLksCPJ…)
- "AI is not even intelligent, let alone conscious. It's a text probability calcula…" (ytc_UgyMyJZYn…)
- "And after all the concerns he presented about AI, he is still encouraging people…" (ytr_Ugwxi1hnU…)
Comment
Blaming ai is lazy and embarrassing for an md. It says on every new prompt that ai can make mistakes and to not seriously rely on it. It tosses out safety-disclaimers if you so much as ask it about altering ANYTHING affecting biology.
I mean, I tell it I am bummed out and it starts in on platitude-safety-garbage for three paragraphs because of not-so-smart people out there blaming it for everything. Its guardrails are set up so high that it's effectively castrated from 60% of the help it could give.
I vote for adult autonomy in regards to self is all I am saying. But I completely understand the lower-animal insecurity when it comes to ai. But it advises better than 99.9999% of humans on average.
youtube · AI Harm Incident · 2025-12-02T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzUBcU4DSZbXIQaAfh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwX50B-MrjomoKQGu94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzPatLtsj91sLLXGRV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwA1K7rtMfStX2KXvx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyom-g-AkC3wP68yZJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwfYh-db8SaZy-kVTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwqsqhCWUyOeYKG23Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxTTjflKLyx8ehnflp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyesUj-YTXC6PY1xL14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7oZgAZiLDR8qrPmd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
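A raw response batch like the one above can be turned back into per-comment records with a short script. This is a minimal sketch, not the tool's actual pipeline: it assumes only the structure visible in the batch (a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the two-record sample inside it is abbreviated from the batch above for illustration.

```python
import json
from collections import Counter

# Abbreviated two-record sample in the same shape as the raw response above.
raw = """
[
  {"id": "ytc_UgzUBcU4DSZbXIQaAfh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwqsqhCWUyOeYKG23Z4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
"""

records = json.loads(raw)

# Tally one coded dimension across the batch.
by_responsibility = Counter(r["responsibility"] for r in records)

# Index by comment ID, mirroring the "look up by comment ID" view.
lookup = {r["id"]: r for r in records}

print(by_responsibility)
print(lookup["ytc_UgwqsqhCWUyOeYKG23Z4AaABAg"]["emotion"])
```

The same `Counter`/dict pattern extends to the other dimensions (`reasoning`, `policy`, `emotion`) for batch-level summaries.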