Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So what you're saying is the ai was correct and you're interpreting it as racism…" (ytc_UgxMKc3yL…)
- "I also think AI Art can be considdered to be Art BUT u cant say that it is a Tra…" (ytc_Ugw_b4QmL…)
- "From the man that sits there on JRE saying how scared he is of AI and the direct…" (ytc_Ugy7Wm4nW…)
- "Whenever i see someone who says AI art is unethical, i know that they get their …" (ytc_UgxKFr3ZD…)
- "Yeah self-driving cars are dumb. We should just invest in high-speed rail like t…" (ytc_Ugxm3K9l7…)
- "As someone who won't study art but has it as a hobby I absolutely think AI art i…" (ytc_Ugx7EB3YZ…)
- "ai "artists" mad cuz got no talent and think they are good for stealing art…" (ytc_UgyZeZ4xg…)
- "AI and robotics = exponential tech. Robots will build better robots. AI will dev…" (ytc_UgzEH9uaj…)
Comment
A lot of fear porn wrapped in some flawed assumptions. If Mr. Yampolskiy has been working on AI safety for 15 years and still has not offered any solutions, then he is either not very creative or is just milking the fear for all it's worth. Given the size of LLMs today and the potential size of a superintelligence model, it will need specialized data centers to host it, which can only be provided by a handful of tech companies. No one is stupid enough to let a model have free rein, left unchecked in a distributed hosting environment. AI companies should create a hosting environment that allows human handlers to cut power and internet connectivity with analog command-and-control safety protocols. It's a kill switch. The guest is thinking of "soft" logical controls for safety, which is pointless if the superintelligence is smarter than you.
youtube · AI Governance · 2025-09-04T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgykGSqWBGuuze0bbsp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbqaiiEQr7S0SzacZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwhelrLUprqtP3Gm7V4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugww9M3xU6YqoX0oqnV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxqPgdFTZ6FNLw2daN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfC3tToE2-VMXt6nB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDM4VnIn85Q1k6cwJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAphtYyog3h0C7czB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxE1Ad6jqtdgDN8ZYN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0h_awXKcOeY-bqlt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"}]
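Each raw response is a JSON array of per-comment codings over four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of looking one up by comment ID, assuming the response is stored as the JSON string shown above; `index_by_id` is a hypothetical helper, not part of the tool:

```python
import json

# The four coding dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records)
    and index each record's dimensions by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

# One record from the raw response above, reproduced for illustration.
raw = ('[{"id":"ytc_UgxfC3tToE2-VMXt6nB4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"}]')

coded = index_by_id(raw)
print(coded["ytc_UgxfC3tToE2-VMXt6nB4AaABAg"]["emotion"])  # outrage
```

The indexed form matches the Coding Result table for that comment (developer / consequentialist / none / outrage), so a lookup by comment ID is a single dictionary access.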