Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- "There is a lack of Surgeons and they make mistakes so there is a need for machin…" (ytc_UgwyGZyZm…)
- "Just store enough of it on a blockchain and have protocols for access to anythin…" (ytr_UgzQBnzqF…)
- "There was once a point that copyright/patent laws didn't exist. Peoples had a di…" (ytc_UgxSTr_wO…)
- "This is not the point of view of music streaming platforms who are using Ai to m…" (ytc_UgxD_Q9AC…)
- "As an underground laborer, I will not be replaced by AI. At least not for anothe…" (ytc_Ugx7x2ASS…)
- "Elon keeps on doing 'A.I danger' vids, yet he is also a major player & investor …" (ytc_UgxJtlfRy…)
- "Testing kits that work and are approved by international agencies are what we ne…" (rdc_fjzj0jp)
- "But it does help with getting the storyboard done and get the basics In place p…" (ytc_UgxHUCgNJ…)
Comment
I also think that the comparison between AI training and humans being inspired is deeply misleading. Humans and AI really don't think the same way and don't engage with information the same way; comparing the two is a pretty obvious false analogy. Imo, this narrative that people and AI can work in the same way is straight-up dangerous considering the potential scale of AI technology and how people seem to blatantly ignore the principles of basic empathy when concerns about AI are raised. I think AI can be a helpful tool (especially in fields like medicine, e.g. protein folding), but when we actively refuse to even acknowledge its flaws and instead choose to cover our ears and go "it's inevitable... you can't stop it", we effectively set up AI technology to be misused and abused.
Source: youtube · Viral AI Reaction · 2025-03-30T22:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy1QS-PJjsuYqBJejd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz61RF9m6M7bFdtVEp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUyNW0njCtSjHPTnZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmML5NnXLI2CGXunV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzGyBeD7zqEDZCETUN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzM7-CnGq_suE_xgk54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3o2JoRH-g3BXtpb14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxf78FDZ65-MFFno-Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxzKLnijRPegafUKAN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXdVUmRxSA2StaFI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
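A raw response like the one above is a JSON array of per-comment codes. The following is a minimal sketch of how such a response could be parsed and validated before the codes are stored; the four dimensions come from the coding-result table above, but the allowed value sets are only inferred from the samples shown and the actual code book used by the tool may differ.

```python
import json

# Allowed values are inferred from the sample responses shown above;
# the real code book may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}},
    rejecting records with missing or unrecognized codes."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]  # KeyError here flags a malformed record
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = codes
    return coded

raw = ('[{"id":"ytc_UgxXdVUmRxSA2StaFI94AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
result = parse_raw_response(raw)
print(result["ytc_UgxXdVUmRxSA2StaFI94AaABAg"]["policy"])  # regulate
```

Validating at ingest time keeps malformed or hallucinated codes out of the results table, so the "Coding Result" view only ever displays values from the code book.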