Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
what could go wrong?: Bias and Manipulation: Dictators and cartels may use AI to…
ytc_Ugw3KSmdW…
wiping out jobs with robots and ai might be ok assuming we re write our social …
ytc_UgxwM9dne…
I'm with ChatGPT on this one. ChatGPT tries to be practical, while Alex is talki…
ytc_UgxrvtjLB…
I work in software and let me tell you we are still a long way from mimicking hu…
ytc_UgzbgYA3w…
I'm thinking within the next 25 or 30 years we seem to be heading towards an " I…
ytc_UgwbM1Q3H…
100 years from now little AI children will watch this video and recognize this m…
ytc_Ugyw522_w…
If there was another platform other than you tube that gave me the information I…
ytc_UgznHJHAz…
I know AI will replace software engineer and after 1000 yr may be everything is …
ytc_UgwDcoWSi…
Comment
Really nice follow-up! Appreciate you going in to more detail on copyrighted work being used in training data and the evidence we can see of that.
I wasn’t familiar with the license stable diffusion uses before just now, looks like it uses a strain of “Responsible AI Licenses” that impose some basic usage restrictions that would be hard to enforce but do not deny anyone from profiting off the models or their output
youtube
2022-12-10T22:5…
♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgwoXh-91pnZLxFGOfd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwQGh4cXKRht9aweBV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzEwzYieuWmk91DQyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy2d1fI1QLWzWGjGxJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzcxCl8hLMEAquxZL94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwUt3WwmB9La7Re-QV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFzQOxbbjG5gaMF7R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxUcOJVfhgVzuf71QZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxucpX2kuLOKPA51X14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxR0mSD1idVfltQS2d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
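The "look up by comment ID" step above can be sketched as a small parser over the raw response: the model returns a JSON array of per-comment records, so a dict keyed by `id` gives the coded dimensions for any comment. This is a minimal sketch assuming the exact record shape shown in the raw response; the `lookup_coding` helper name is hypothetical, not part of the tool.

```python
import json

# A two-record excerpt of the raw LLM response shown above.
RAW_RESPONSE = (
    '[{"id":"ytc_UgzEwzYieuWmk91DQyZ4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_UgwQGh4cXKRht9aweBV4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"fear"}]'
)

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the raw JSON array and return the record for one comment ID.

    Raises KeyError if the ID was not coded in this response.
    (Hypothetical helper, illustrating the lookup step only.)
    """
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coded = lookup_coding(RAW_RESPONSE, "ytc_UgzEwzYieuWmk91DQyZ4AaABAg")
print(coded["responsibility"], coded["policy"], coded["emotion"])
```

These three fields match the "Coding Result" table for the comment shown above (company / regulate / approval).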