Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Sounds like weird phrenology crap "blue blood", no loser I just worked harder th…" (ytc_UgxgZB8w0…)
- "Even Zuck feels like some meetings could be an email. But what happens when the …" (rdc_oh2vojt)
- "These people are sick,..even have the guts to tell the world of the major proble…" (ytc_Ugz-BWuxk…)
- "@luluisaac2617 well you're speaking of free old text models that you have access…" (ytr_Ugwvm5KHX…)
- "It is going to take tens of millions of people to develop, deploy, and to overse…" (ytc_UgyriAIVm…)
- "The AI minister is really gifted at politics; she must have gone through ENA. She…" (ytc_UgxQr_Rzw…)
- "I think it says a lot that the Picasso example kind of starts with the result an…" (ytc_UgyKjgHQ9…)
- "*CLICK BAIT!* WHAT HAS always puzzled me is how EVERYONE sits back and talks abo…" (ytc_UgyauZfH6…)
Selected comment (source: youtube, posted 2025-12-05T07:3…):

> Yes, if you use Ai badly you will get bad results. Shocker. All you would have to do is make a better and more specific prompt, or use a better Ai model, or, if it doesn't exist yet, literally wait like a year and there will be huge advancement in Ai technology, just like there have been every past year. This is a very narrow minded way of thinking. You know what, if anything you can even make straight up portraits by just taking a photo of your subjects, capturing the emotions and what not, and upload it into an Ai to make it look like a drawing or painting or whatever you want.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyyxxycNN7R3mJEusl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8GfvTBNpv21Bub4t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5pexF5FatEj5HalR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzI8dww9o7ahM7dlj94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxA7oO1DrBLxluGUBJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwPbBpJeoZO8Mssxht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxDfNF6bcBbB5LfufB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw-T0rV_aSCS3KFOy14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNG3ziW_cS6bjA5Nt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzDmcEHFITIIWeh9UZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
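A raw response like the one above can be turned into a lookup keyed by comment ID, which is how the "look up by comment ID" view can resolve a coding. A minimal sketch (the `index_by_comment_id` helper and the two-row sample payload are illustrative, not part of the tool; the five fields match the schema shown above):

```python
import json

# Two sample rows copied from the raw response above, in the same schema:
# id plus the four coded dimensions (responsibility, reasoning, policy, emotion).
raw_response = """[
{"id":"ytc_Ugx8GfvTBNpv21Bub4t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5pexF5FatEj5HalR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response (JSON array) into a comment-ID -> coding dict."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

lookup = index_by_comment_id(raw_response)
coding = lookup["ytc_Ugx8GfvTBNpv21Bub4t4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: user indifference
```

A real implementation would also need to handle malformed model output (non-JSON text, missing fields), since the response is generated rather than guaranteed valid.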