Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI using pirated copyrighted material to make plagiarism is not gonna make AGI c…" (ytr_UgzRA6XNL…)
- "Hope you say that when you get replaced by AI, which will eventually happen to e…" (rdc_lv8cgsc)
- "I work with AI everyday for data analysis. I use all of them, Deepseek, Chatgpt,…" (ytc_UgzUWJDOc…)
- "Facebook did the dame over 15 tears ago, there had to be someone proofviewing th…" (ytc_Ugz6xMqIv…)
- "Why are people assuming AI replacing humans isn't natural? Humans have always ma…" (ytc_UgzY6AtOE…)
- "@MaganDaniel, I like how you incorporated this TED talk into your Digital Citize…" (ytc_UgwJYMoAo…)
- "Ok, but you need to hire a human to catch all the errors the AI makes. Gotta cat…" (ytc_Ugxaf0hog…)
- "We appreciate your feedback. The interaction might come across as eerie due to t…" (ytr_Ugye41zsm…)
Comment
This is a very ideal, best case scenario (for AI proponents). The recent MIT study that showed 95% of AI projects fail to deliver. Also this is great if your company/country is heavily digitised and ready. Most are not, even the US. There is this perception that everything is seemlessly digital, companies running off edge computing or in data centres with all their data, processes and workflow automated/optimised. That’s simply nowhere near the reality I see in my corporate life. The AI might be ready, but the foundations it needs isn’t.
youtube · AI Governance · 2025-09-04T08:1… · ♥ 62
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyGhBBVR9Xb_DlOeLp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwpyVoEMqGt4p_DS3N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6Ibr3gdEp5VLaPDV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy5hE0p0fcflUxvk2B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydmNxJF69tJJq_Nxl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw9kPiDfPjAn1yADad4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyyNxvhunuQqRWkcqZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw7Eda0GrwAi641dXp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzjnMOmJW4K6akSpMt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgybwcRhrdTaCPbK09t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
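The raw response is a JSON array with one record per comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of parsing and validating such a response in Python; the allowed value sets below are inferred from the examples on this page, not from a published codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"outrage", "indifference", "mixed", "fear", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index records by comment ID,
    raising ValueError on any unrecognized dimension value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # -> regulate
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each coded record can be fetched in constant time once the response is parsed.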