Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “Atleast now I can say that "the terminator theory is finally in development wher…” (ytc_UgyOqKh-J…)
- “There is an option where you can turn off Chatgpt to train itself on your data…” (ytc_UgxaIsnGo…)
- “After the examples by Mr Hinton, I have a thought that makes me smile and laugh.…” (ytc_UgylKT0vk…)
- “If we all have jobs, then we are independent. We are free to make our own choice…” (ytc_Ugw8YgcwW…)
- “So Elon Musk himself didn’t have a succinct answer to this question. Scary scary…” (ytc_UgyX6_gz8…)
- “Elon musk "I named it open ai" this dude named a thing he didn't invent. How arr…” (ytc_UgztEqWbq…)
- “Garbage in, garbage out. If you train an AI on human output and feedback, this i…” (ytc_UgzYEF2-6…)
- “In the topic of the AI mind. First a premise, the mind and the brain are two se…” (ytc_UgzvMp8dI…)
Comment
This law is the dumbest implementation that they could have come up with. Honestly. It actually even makes existing pre-AI workflows illegal because they made it so broad. They define AI as any machine that provides an automated process. That's the most crazy definition ever. Every machine in existence falls under that category. Then they go on to say that using an AI (by that crappy definition) in a situation that could be harmful to humans is now illegal, and has huge punishments. Okay, I guess this means that self-driving cars are illegal, automated assembly lines are now illegal. Heck, your pellet grill is probably illegal under this inane law.
youtube · AI Governance · 2024-03-15T23:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxzmp1T5QbCQJIRIYh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwl0JJw6AMDWVkG9qF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2rTO8WNF0cfIhFK94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwes3ex4plrERQkFgh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-LWwJ7j1i-DlYWq94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxslAfePfp9Dpz3jx94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyl7m5O9zRV4fH5jZl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzNYFdHKnP7izNHNIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw2o7GjNQwUvjhkAdZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx46CsB7XTCTM5gNfV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
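The raw response is a JSON array with one record per comment. A minimal sketch of the "look up by comment ID" step, assuming the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion) are the full schema; the helper name `index_by_comment_id` is hypothetical, not part of the tool:

```python
import json

# Raw model output: a JSON array of per-comment coding records
# (abbreviated to two records from the batch above; schema assumed
# from the coding-result table).
raw_response = """[
 {"id":"ytc_Ugxzmp1T5QbCQJIRIYh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzNYFdHKnP7izNHNIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the raw LLM response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugxzmp1T5QbCQJIRIYh4AaABAg"]["emotion"])  # → outrage
```

With the records keyed this way, any comment ID from the samples list resolves directly to its coded dimensions.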