Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Only in America. First human robot, First cyber truck, and they use it for war.… (ytc_UgwvxxfNT…)
- The AI conversation needs to explore the distinction between Morality and Ethics… (ytc_Ugyv_sZjk…)
- Why is this a surprise? LLMs are trained on data that included all our science f… (ytc_Ugzhrihmz…)
- the problem with ai art is sometimes i cant tell if it is ai art but sometimes i… (ytc_Ugw9P_EGG…)
- I think one of the saddest things of the AIS to be honest is how they don't know… (ytc_UgzCi5oIU…)
- “Yee hah” said the robot as he put on his cow boy hat and was handed a machine g… (ytc_UgyHg_xwJ…)
- Within 10 years, we will have F-bots answering to digital Pimps, standing on th… (ytc_Ugz3D-34-…)
- How is this a good thing?!!! 🤦🏻♀️ People losing their jobs?!!! AI?? REALLY....t… (ytc_UgwZlyVPN…)
Comment
I appreciate the attempt at simplifying a complex field of computing science. What was missing is commentary on the ethics behind the use of emergent technologies such as AI. This matters because training an AI system requires both negative and positive outcomes to be fed back into the system. The subject of care must provide their informed consent AND understand the experimental nature of the technology. The regulatory frameworks in place, exist to help ensure that negative outcomes are reduced (based on experience) AND that there is a responsible person for the decision making that led to the outcome. Who is that responsible person with an AI system? The clinician who blindly trusts the AI, or the engineer who coded the algorithms? As a health care professional and digital health leader I suggest that until those aspects are established from a medico-legal, liability, education and socio-technical point of view for clinicians - we should be rigorously challenging the message of simplicity put forward here SO that AI can be a trustworthy tool.
YouTube
AI Harm Incident
2023-01-16T02:3…
♥ 53
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyPC9oD4c-VNQt32Yp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTsskp3Wt431jO0ZF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxspuduEXafDs1CfBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzEVsldhtgbA3OX014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzt6LVhJKf6zq4_0QV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzGGTuEXltTylPWlbh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgymbGFVHjlFNcQ7GEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkQBs7Pi9fjLsqbT94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwy2byRzHh5vXFMz-N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzySHsGMUTWf6H8xSx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
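A raw response like the one above is only usable once it parses as a JSON array and each record carries the four coding dimensions shown in the table. The sketch below (an assumed helper, not part of the actual coding pipeline) shows one way to parse and shape-check such a response before storing it; the function name and the inline sample record are illustrative.

```python
import json

# The four coding dimensions visible in the Coding Result table, plus the
# comment ID that links each record back to its source comment.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and verify each record's shape."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # approval
```

A shape check like this is what makes the stray `)` at the end of the raw response above detectable: `json.loads` would reject it outright rather than silently dropping the last record.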