Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- We need Moses to save us from the industrial revolution of AI technology and AI … (`ytc_UgymWG_T1…`)
- @interdimensionalsteve8172 The massive increase in social safety net programs im… (`ytr_UgzL-20S6…`)
- Facial recognition is based on data collected from police statistics. As we know… (`ytc_UgxOAjtPe…`)
- In French ChatGPT sounds exactly like “Chat j’ai peté” which means “Chat I farte… (`ytc_UgzDg3A7t…`)
- Did ChatGPT ask a personal follow-up question like he's interviewing you and is … (`ytc_UgxuCqB5a…`)
- Appreciate you and your team, thank you for all you do. Thank you for putting f… (`ytc_UgwmDc8O1…`)
- To be fair, the bot stayed in character and didn't say anything crazy. It told S… (`ytc_UgzkF6tH3…`)
- Bro talking about ai, I saw a dog clinic with ai dogs with clothes 😭… (`ytc_Ugzx5ZyTg…`)
Comment
On a depressive note, I don't think we'll ever be free of this problem. Even if the current lawsuit on stability AI goes well and they are forced to shut down their models and only use copyrightfree datasets for training, the problem is that anyone can easily fine tune the base models on any artists work. Because of tech like LORA, anyone with a 3060 and up only need to grab about 10-100 images of some artist, and dedicate a few hours of training to obtain a LORA with their style. You don't even need to be technically inclined because people have already made easy to use Web UIs to do it. Once an artist publishes their work online there is literally nothing they can do. The same problem applies for non-consensual training on real people.
youtube · Viral AI Reaction · 2023-03-01T18:5… · ♥ 103
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzRGyntK5ZNPPVvVsN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwxH0w9QIigxEF-J4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwJqWllfj88Xw6XT8Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxH4ItJFtgNhBR4bGJ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxZA9ZIXsqHAABjTod4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw-0xCnKjo1U9UAVC94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8w6PERlRjVSp8LfF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMJlIx2g4YEnruacN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwydhVbLj-_IO7NQNV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgyibOEh8Vx1LIrIVTh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
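The lookup-by-ID workflow above can be sketched in a few lines: parse the raw LLM response as a JSON array and return the record whose `id` matches. This is a minimal illustration, not the dashboard's actual code; the helper name `lookup_coding` is hypothetical, and the sample response below is trimmed to the one record from the array above that matches the Coding Result table (`ytc_Ugx8w6PERlRjVSp8LfF4AaABAg`).

```python
import json

# Trimmed example response: one record from the raw LLM output shown above.
raw_response = """
[
  {"id": "ytc_Ugx8w6PERlRjVSp8LfF4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse a raw LLM response and return the record for one comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    # First record whose "id" matches, or None if the ID is absent.
    return next((r for r in records if r.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugx8w6PERlRjVSp8LfF4AaABAg")
print(coding["emotion"])  # fear
```

Guarding the `json.loads` call matters here because model output is not guaranteed to be valid JSON; returning `None` lets the caller distinguish a parse failure from a missing ID.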