Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Will smith in I Robot asked if AI could.... And now that AI is better then you w…" (ytc_UgymkMcRt…)
- "A message for fellow artists: The future ahead is very uncertain... with rocky …" (ytc_UgxEooa_e…)
- "AI art is literally the most useless pile of turds on the planet and does more h…" (ytc_Ugz-_xxK2…)
- "Just imagine, if in the future there will be artificial Sperm generators and art…" (ytc_Ugx_ZhsrP…)
- "@ReactInfo54 There is a huge difference between generative AI and assistive AI …" (ytr_Ugw8vNCEe…)
- "thank you for a level-headed argument ✌the doom-saying and overly negative echo-…" (ytc_UgzIQaJ5L…)
- "AI is the Trojan Horse designed to destroy the American economy! Wake up America…" (ytc_UgwmLH2J_…)
- "The developments in A.I. make me think of that saying that goes along the lines …" (ytc_UgyGamr4C…)
Comment
As an actual researcher in biology, I laughed out loud when he said AI can do PhD level biology. I've used multiple AI programs for very simple tasks like "make a list of all known casual genes for X syndrome" and it misses many and hallucinates others. Even with 90% accuracy, having random BS thrown in even occasionally can severely ruin science. Humans make mistakes, but not like AI. A human may space out and forget a zero in a calculation when you go back and check your math. AI will straight up say things that aren't true or don't exist which is WAY worse. I've never accidentally written down things that never happened in my lab notebook. Human mistakes are akin to forgetting an egg in a cake recipe. An AI mistake is akin to accidentally throwing in a tsp of poison into the cake. A human would never do that. Our mistakes are different.
Source: youtube · 2026-02-14T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwrKcsgfDzFNjAD22V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyXJ5Vm5UT18qnSli94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzrpGiAjvYLutmNlFx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxt4ABObDDyusK61yB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNsOf7VjQFx62bA9N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyBZ-EGn5EkRzOcIFd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwA99H5lzdY9Dr2Q694AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyJhUy98T89HWdTyMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeiG7YS3hX0VvuitB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzLYvRdyhv-PMG2tBF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
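The raw response is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. A minimal parsing-and-validation sketch follows; the `ALLOWED` vocabularies are an assumption inferred only from the values visible on this page (the actual codebook may define more labels), and `validate_codes` is an illustrative helper, not part of the tool:

```python
import json

# Assumed vocabularies, inferred from the values visible in this dashboard;
# the real codebook may allow additional labels.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose ID and
    dimension values look well-formed."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # IDs on this page use ytc_ (comment) and ytr_ (reply) prefixes.
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # 1
```

Dropping malformed rows rather than raising keeps a batch of codings usable even when the model occasionally emits an off-vocabulary label.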