Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think UBI should be implemented, i think it HAS to be implemented, eventually,…" (ytc_UgzXsDbk-…)
- "Shocking that a nation might want to protect its sovereignty and the sovereignty…" (rdc_grs5nng)
- "1:37 Correct. AI saves money for the ultra-wealthy 1%. We could ALL benefit fro…" (ytc_UgwDgq3ts…)
- "I've self-published on Kindle. As you might expect, Amazon's automated approval …" (rdc_fsy71nl)
- "There was an Acton Academy that I looked into for my kids that had a similar conc…" (ytc_Ugwc-hG2n…)
- "The AI art defenders are some of the most insulting, disrespectful, entitled, an…" (ytc_UgxwLqxkM…)
- "All things considered, I'm a lazy piece of [silly]; I even used generative AI to…" (ytc_UgxYChFUc…)
- "Y'all think this is fun or funny but on my opinion I think we should get rid of …" (ytc_UgxPXcdDG…)
Comment
I think that any AI model should be required to be trained on 100% legally obtained, opt-in data. If even a single piece of data is proved to be used without permission or that permission was revoked from, they have to purge the entire AI model and retrain from scratch or from a checkpoint before that data point was used.
That would make these giant companies care about how and where they source their content. Would also allow any content creator/owner a course of action to protect their IP.
I don't believe it is fair use if you can use a creative prompt to get the original training content back out. Sounds more like plagiarism with more steps.
youtube | AI Responsibility | 2026-04-16T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxamUxoV7xAGSn4c9p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxQvrzmAYOmZNS2kdx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxQjZo509jN7CrJk5h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxY1ndoiW4xrAD-9IV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKwEU9n-7MqraomEV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx-kvdf2U56rJ4551p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwVXdZ17kzO1vDUm1B4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzZqAMUpucMGMDmUap4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIH_OsgG9IaCRMIGh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwDow8uK1cKlEpBgkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
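A raw response like the one above has to be parsed and validated before the per-comment coding results can be displayed. Below is a minimal sketch of how that step might look; it is not the tool's actual implementation. The function name `parse_coded_batch` is hypothetical, and the allowed label sets are assumptions inferred only from the values visible on this page (the real codebook may define more labels).

```python
import json

# Label sets inferred from the values shown on this page; the actual
# codebook may include additional labels (assumption).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    {comment_id: codes} mapping, dropping records with unknown labels."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        # Keep a record only if every dimension carries a recognized label.
        if cid and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            coded[cid] = codes
    return coded
```

With a mapping like this, the "Look up by comment ID" view reduces to a dictionary access, e.g. `coded["ytc_UgxY1ndoiW4xrAD-9IV4AaABAg"]["policy"]` yielding `"regulate"` for the batch shown above.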