Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> The issue that I see is that developers of AI assume (wrongly) that governments and large industries (big pharma) for example will use AI to benefit everyone. But we all know that won't happen. Knowledge is power and such power leads to personal wealth. As an example, if AI can dramatically improve productivity, then GDP in the US would improve to a point that all debt can be paid off - IF the government wants that to happen. But it won't. I see this as a danger to the average person's way of life, no matter what country you live in.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-07-16T19:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
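A coded record like the one above can be sanity-checked against the four dimensions before being stored. This is a minimal sketch; the `CODEBOOK` value sets are only the labels observed in this batch, not necessarily the full coding scheme.

```python
# Hypothetical codebook: dimension names come from the Coding Result table;
# the allowed values are just those observed in this batch of responses.
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation"},
}

def validate(record):
    """Return a list of dimension=value strings that fall outside the codebook."""
    errors = []
    for dim, allowed in CODEBOOK.items():
        if record.get(dim) not in allowed:
            errors.append(f"{dim}={record.get(dim)!r}")
    return errors

record = {"id": "ytc_UgwYhiUkPadyv4WVKS94AaABAg",
          "responsibility": "developer", "reasoning": "virtue",
          "policy": "regulate", "emotion": "outrage"}
print(validate(record))  # prints [] — every dimension holds a known value
```

Any record with a missing or off-codebook value comes back as a non-empty error list rather than silently entering the dataset.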
Raw LLM Response
```json
[
{"id":"ytc_UgxCsSHsS1aDOkbUaUp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx_rtIfVUZc9OucuvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6EXfYVnA7jGjcsnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzzJjURfdeHVAa342B4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyW2Fc6T8E0Iluvo9h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcOqKRQ1s0kEkQcoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugye8Efm6R7xg_vmCKF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwYhiUkPadyv4WVKS94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwMSusMLzUsfTs5KMh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw04wMt6ypwd8b2sCZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
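Because the model returns one coding per comment in a single JSON array, looking up a specific comment's coding is a parse-and-index step. A minimal sketch, using an excerpt of the batch shown above (the two records are copied from it; the `index_by_id` helper is illustrative):

```python
import json

# Excerpt of a raw batch coding response, as returned by the model above.
raw_response = '''
[
  {"id": "ytc_UgwYhiUkPadyv4WVKS94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw04wMt6ypwd8b2sCZ4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
'''

def index_by_id(response_text):
    """Parse a batch coding response and index each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgwYhiUkPadyv4WVKS94AaABAg"]
# These dimensions match the Coding Result table for the comment above.
print(coding["responsibility"], coding["policy"])  # prints: developer regulate
```

Indexing once and looking up by ID keeps each coded comment traceable back to the exact model output that produced it.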