Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Anything created by man is going to have flaws. 20 years from now AI will still … (ytc_Ugx0SAtWu…)
- Doesn't help that, after trainign off it's own data for so long; AI art has all … (ytc_Ugwqsm_i2…)
- ....hope they have considered a kind of "right to coffee drinking act" alongside… (ytc_UgwNbaPbt…)
- EXACTLY WHY AI WILL NEVER SUCCEED💯DESIGNING A SENTIENT INTELLIGENT LIFE FORM IS … (ytr_Ugz3o2HYe…)
- AI need an afterlife. A super powerful absorbing meta AI with dynamically weigh… (ytc_UgxSWhRzT…)
- If someone has to rely on AI to make art, then they don't have an opinion on art… (ytc_UgxphrfGJ…)
- regarding embodied AI. Some of us are blind or deaf or paralysed or all three. A… (ytc_UgxwdT4i8…)
- And now AI has access to this information and can learn to deceive us better.… (ytc_UgzJ332DM…)
Comment
IMO AI technology should definitely be regulated by the US Congress and not the states. That is because AI is a global and overreaching technology that can encompass almost every part of life. The fact that it is controlled and manipulated by Corporate technologist with little oversight is truly concerning. It is ridiculous that you could regulate it state by state with any consistency. So far Congress has totally failed on this and appears frozen into stupefaction on this subject. In the end we will end up paying the price for this failure and lack of attention it now deserves.
Source: youtube | Topic: AI Governance | Posted: 2025-05-10T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz01_5_Hb6yn5xKq5N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzwurJCdaaT_5UaW314AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzBHMx8_t6BkMBj8Ox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzrhsbrOvBqAxJLej54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"concern"},
{"id":"ytc_Ugw2ZTplbMprtYsaOol4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyd1JzU5FReyNYZfL54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxbueR2JaacTRJvoCJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy7Q9frg44rugvZLZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwG8o6HESvDVl6Z8lJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz0j3RPi6242EsIZhx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
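A raw batch response in this shape can be parsed and validated before the per-comment coding results are stored. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed value sets are assumptions inferred from the sample output above, and the `ytc_x` ID in the usage line is a hypothetical placeholder.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (an assumption, not an authoritative schema).
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "concern", "indifference", "outrage", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch-coding response into {comment_id: codes},
    rejecting rows with unknown dimensions or out-of-set values."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        for dim, value in row.items():
            if dim not in ALLOWED:
                raise ValueError(f"{comment_id}: unknown dimension {dim!r}")
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: bad value {value!r} for {dim}")
        coded[comment_id] = row
    return coded

# Usage with a one-row response (hypothetical ID):
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
codes = parse_coding_response(raw)
```

Validating against a closed value set at ingest time catches model drift (e.g. a new emotion label the downstream tables don't know about) before it silently enters the coded dataset.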