Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "I thought and still think AI is a nice concept but the way it's used is not okay…" (ytc_UgwuTOeu-…)
- "Writers gering replaced by Ai? I mean, I'm with the workers here, but not for …" (rdc_jj5maq9)
- "If an AGI is able to interact physically with the real world (which requires bot…" (ytr_UgzTsdWzj…)
- "The one who tried to feel fight the ai but then... *curls in ball and rocks bac…" (ytc_UgxFYVvnX…)
- "there's one thing i can guarantee you is that ai bros are NOT financially unstab…" (ytc_UgwwXYkFk…)
- "On the long run cheating is giving people bad luck, I guess. On the short run th…" (ytc_UgzA7fuPY…)
- "It took 14 Billion years to get a thinking human. A.I. was started in 2010 maki…" (ytc_Ugy1k_eyS…)
- "Except that's how it worked in the early history of AI and hasn't worked like th…" (rdc_jmusf1d)
Comment
0:04 Lol......if that's the case, then where would humans get the money to spend on stuff so that companies can continue making the tech that enables AI to do all the work?
Every time some company or genius says that some technology or invention will end poverty, world hunger, age-old problems on the global stage, we need to be wise enough to question the how, instead of immediately celebrating the potential of a solution. History has had many of such announcements and even implementation of said technologies or claimed inventions but poverty, hunger, and other global problems still exist today. Some have even gotten worse. We all believe that newer, more advanced tech can help to solve problems. There's no denying that's possible. The real question to examine is, how?
youtube · AI Governance · 2025-08-02T05:1… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
{"id":"ytc_UgxiVOBFVvOssXkyQ1x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxv-v1nEHM6eg42AOh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgxNy2zApTjrj9uP1PZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw57UzGdyIb2P9rjH94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMWmcGt8bRTWR4N7p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyHVNhikmo1KVJ6ofR4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyjaEfxBY5kxC7Gaal4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyewHFrBAg6NOtzgmR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwoosx9PUB12NSj80R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWRiNizuhUFubr9DF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
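The lookup-by-comment-ID workflow described above can be sketched in a few lines: parse the batch JSON response, index the rows by `id`, and fetch the coded dimensions for any comment. This is a minimal illustration, assuming the raw output is a JSON array of objects like the one shown; the two entries below are copied from that response.

```python
import json

# Two entries copied from the raw LLM batch response above.
raw_response = """[
{"id":"ytc_UgyjaEfxBY5kxC7Gaal4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWRiNizuhUFubr9DF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coding for a comment ID, or None if it was not coded."""
    return codings.get(comment_id)

coding = lookup("ytc_UgxWRiNizuhUFubr9DF4AaABAg")
print(coding["policy"], coding["emotion"])  # regulate mixed
```

The same index also makes it easy to spot comments the model skipped: any ID submitted in the batch but absent from `codings` returns `None`.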