Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_Ugybrg_Rb…`: "AI's work syntactical, not semantical 🤷🤦🤦 Means they simulate, without having a…"
- `ytc_Ugx5Y-LCU…`: "Ive seen a real P.a.c.m.a.n robot arm tossing 50lb cases of green beans. Theres …"
- `ytc_UgwmvTF9G…`: "You should have told Dan that this has happened before. China implemented a one-…"
- `ytc_Ugzhsb53k…`: "AI has electronic characters, dangerous. Humans? Great brain, remember AI came f…"
- `ytc_Ugx6el1ws…`: "A calculator that get things wrong 15% of the time is still a bad calculator, no…"
- `ytc_UgxP3KFBR…`: "training on human data may somehow be limiting to the AI not to be smarter than …"
- `ytc_UgzCP5d2s…`: "if there were no, or very very few humans, health care costs could be reduced to…"
- `ytc_Ugw3k83Jp…`: "ChatGPT needs to exam the parents how they abuse mental health of parents. How n…"
Comment

> An experienced doctor might answer well because they have experienced the scenarios. Now you say that AI didn't have anything available to learn from. Then in what way was this test fair enough. AI will replace doctors. That's a sure thing to happen. It's just a matter of waiting.

Source: youtube · Category: AI Harm Incident · Posted: 2024-07-19T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzm3v-0AJPeTDGrgwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxSqTj30OmvHOnf3hZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyxjqxS1AqLu6mqh9Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw4tvg5bMaIAf273bZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxYR9WRLnOALHEPAxJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwPg56ZQhyUaGZ0Iel4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzW6CC-FG44Hcc3C3Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugy2qhWGCLZtN8twC3t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzeJ0eS452dxHtBF-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxjriUE9InF5RDqNON4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"fear"}
]
```
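The ID-based lookup described above can be sketched in Python: since the raw batch response is a JSON array with one row per comment, parsing it and indexing the rows by `id` recovers the coding for any single comment. This is a minimal sketch, assuming the response is valid JSON as shown; the two sample rows are copied from the response above, and the helper name `index_by_id` is illustrative, not part of the tool.

```python
import json

# Two rows copied verbatim from the raw batch response above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugy2qhWGCLZtN8twC3t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxSqTj30OmvHOnf3hZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each row by its comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_id(RAW_RESPONSE)
row = codes["ytc_Ugy2qhWGCLZtN8twC3t4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself approval
```

The row retrieved for `ytc_Ugy2qhWGCLZtN8twC3t4AaABAg` matches the Coding Result table above (responsibility `ai_itself`, reasoning `consequentialist`, policy `none`, emotion `approval`).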