Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Medical science is largely uncertain and there are huge knowledge gaps and lack of data in complex diseases and research continues. AI can be good only as the data on which it was trained. And when there are gaps it is likely to err. But at the same time, pattern recognition, synthesis of existing knowledge will be done better by AI as long the data set in which it was trained was accurate. The bottom line it will remain complimentary and frees doctors to do more research to answer these gaps. It learns from humans and will continue to learn from humans but it cannot become so perfect that it cannot create new knowledge all by itself. It will be great tool.
youtube · AI Harm Incident · 2025-08-17T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyp0HKWMCHIx6TWFYF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZN9xfyGorjH-TCT54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxk_hyjYpeqsEsGmF94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyIb7_ZtOb0Mt2NU_94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOa8JpvIAveaJbWlh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwgfJocbf_IgfibKgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzAHSmJrSykp7Auzd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMyF7xGJSE8r2Zzmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwS963bYWHgWI8QKBZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHBIX_fvnxSrJ2XxB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
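A response like the one above can be checked mechanically before its codes are accepted. The sketch below parses the raw JSON and validates each record against the coding dimensions shown here; the `ALLOWED` value sets are an assumption inferred from the values visible in this response (the full codebook may define more), and `validate_response` is a hypothetical helper name, not part of any particular pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the values visible
# in the response above -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "mixed", "indifference", "resignation", "fear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(validate_response(raw)))  # 1
```

Running this on the raw response either returns the parsed records or raises with the offending comment ID, which makes malformed model output easy to trace back to a specific coded comment.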