Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "LLMs are not conscious. Does not mean we cannot build AGI. If it's capable of sy…" (ytc_UgyZpOzJb…)
- "Am I being paranoid to think that there's a whole group in the Administration th…" (ytc_UgxB9-A7-…)
- "We are already losing jobs to AI. AI already told a kid to kill himself. Another…" (ytc_Ugy9cnLj0…)
- "A self-generative super AGI agent under stealth, I cant think of a more formidab…" (ytc_Ugx0isCCM…)
- "I don't know why y'all don't see a bigger picture YouTube is trying to censor u…" (ytc_Ugyssmd9p…)
- "Stealing existing ideas is not intelligent. Being able to scan existing ideas an…" (ytc_UgzX0smAu…)
- "Most if not all of these drivers were at fault they are all bad drivers every id…" (ytc_Ugyqkm2hK…)
- "AI is too good to be a CEO. We need AI doing science stuff, not sitting at a des…" (ytc_UgynFb8ng…)
Comment
Whilst this story may come true at some point, I think the timelines suggested are not accurate. Think how long it would take to create all those robots and all that infrastructure to change the world like it suggests. I'm quite sure AI will change the world, but at the moment, AI is also engulfed in a lot of hype. Most companies have awful data quality which greatly limits the value AI can bring.
youtube · AI Governance · 2025-08-03T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx8Nd1PjDcfP8Fx-Ol4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyVkm1b1V4R33zyIbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqwJuYYtX5p-ik1ip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzt39nMYxSVMsBfUYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw7N4b0dfgFQCOxoQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz6iL8Wt93ICpuDy_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugz-NyNbvt5hTCQVskp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxLlQ_TuZmKLuBdNax4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3dTkvCIPOxXJiFL14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxmQXvNePKfzmfn4M94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
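A raw response like the one above is a JSON array of per-comment codes across the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload might be parsed and looked up by comment ID — the `parse_codes` helper and the allowed-value sets are illustrative assumptions inferred from the examples shown here, not the tool's actual codebook or implementation:

```python
import json

# Allowed values per dimension -- assumed from the sample output above,
# not an exhaustive list from the real codebook.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval",
                "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    records whose values fall outside the assumed codebook."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record payload for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"resignation"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # industry_self
```

Validating against a fixed value set catches the common failure mode where the model invents an off-schema label; a stricter variant could also check that every requested comment ID appears exactly once in the response.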