Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews with comment IDs):

- ytc_UgwYhiUkP… : The issue that I see is that developers of AI assume (wrongly) that governments …
- ytc_UgytGMUdD… : I have typed this too many times. We need to stop focusing on AI and fix natural…
- ytc_UgyLMAM_S… : Look, I don’t even like say this, but does aroma de look up look like humans but…
- ytc_UgzLkwYCd… : I don't think it works, when 'certain' product's or a service has different reg…
- ytc_UgynJzRLC… : The fact that so many AI "artists" don't disclose their use of AI indicates they…
- ytc_Ugxe7nqtF… : Wow, this is making me so sad. I wish AI could be this great thing, but this doe…
- ytc_UgxDBxHXa… : in this day and age you should not have any videos of yourself, pictures, audio …
- ytc_Ugy8zo1QD… : That's rubbish as chatgpt was never willing to divulge copyrighted content. It s…
Comment
This is hogwash. AI is a buzz word. It’s LLM, neural nets, expert systems and it’s at its limit already. Basically only gets stronger with more server farms and that’s dubious at best. Here’s a question for the layman that anyone can understand.
If nobody bench marked the initial data being fed into AI, no standard level recorded in stone somewhere, how will we correct for drift when papers and projections are submitted that are only 85-95% perfect, and those go into the next group of papers and projections and so on? Who is going to say what is and what isn’t accurate (like precision mapping on weather patterns) and more importantly, how useful is unverifiable data? Who will buy that?
youtube · AI Governance · 2025-11-28T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx_4_WVTOtoyi3Yt854AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxBufL1kr4Eh5i33294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxmODGjBG_zEVwS3C94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxl1XVLTt3vQp8XhHh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwQjR4aUFHe8v7LBGt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzT-ch3Nm_-lnXGDK94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx9DwqEj54KG6bdNVB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwR2zHd-3K9mkJN7Gp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxaEQYDF6i_HvjaQ314AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3JM3RsLF5CED4Jr54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
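A response in this shape is easy to sanity-check before it is written back to the coding table. The sketch below is a minimal validator, assuming the allowed values for each dimension are exactly those seen in the sample above; the real codebook may define additional categories, and the function and variable names here are hypothetical, not part of the tool itself.

```python
import json

# Allowed code values per dimension, inferred from the sample response
# above (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval",
                "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing IDs
    or out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={value!r}")
    return rows

# Example: a single well-formed row passes validation.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
rows = validate_batch(raw)
print(len(rows))  # → 1
```

Rejecting unknown values at parse time keeps a drifting model from silently introducing new labels into the coded dataset.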