Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Without a large pool of developers to innovate how will AI progress? For those t…" (ytc_UgxKwCgwK…)
- "There is no piece of ai "artwork" that could replace the creativity and ingenuit…" (ytc_UgwKgIdqe…)
- "What article are you reading? The images generated appear scantily clad (not nud…" (rdc_n75s9gx)
- "AI isn't a threat in and of itself. The developers of it can be the threat.…" (ytc_Ugxm6fRVA…)
- "AI is cool. Human Artist are cool. I don't get the hate in either direction. AI…" (ytc_UgxtkvahH…)
- ""I haven't come to terms with it yet." Translates to "I can't reconcile the fac…" (ytc_UgwiBZ8fP…)
- "This was obviously going to happen. You cant fucking leave the single market and…" (rdc_fwi606n)
- "A lot of people don't know how expensive anime creation is average 12-13 episode…" (ytc_UgySljMXy…)
Comment
Just some random idea here. If an AI model gets things wrong 10% of the time, couldn't one just use that same model to check the output and tell it to find that 10%? Once it said what that 10% was, the primitive humans can check that knowing/hoping that 90% of the 10% is accounted for and then just rinse and repeat either by checking subsequent ai outputs or by comparing multiple ai outputs of the same original model. Does that make sense? I'm almost certain it's not that simple, but wouldn't that work in theory?
youtube · AI Responsibility · 2025-10-01T17:2… · ♥ 1
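The commenter's idea — running the same model over its own output to flag the residual 10%, then repeating — can be sketched as a back-of-envelope calculation. This sketch assumes, strongly and probably unrealistically, that each checking pass independently catches 90% of the errors still present; in practice model errors are correlated, so a model tends to miss the very things it got wrong in the first place.

```python
# Back-of-envelope model of iterative self-checking.
# Assumption (a strong one): each pass independently catches 90% of the
# errors that remain. Correlated model errors would break this.
error_rate = 0.10   # the model is wrong 10% of the time
catch_rate = 0.90   # fraction of remaining errors each check flags

for check_pass in range(1, 4):
    error_rate *= (1 - catch_rate)  # errors that survive this pass
    print(f"after pass {check_pass}: {error_rate:.4%} residual error")
# after pass 1: 1.0000% residual error
# after pass 2: 0.1000% residual error
# after pass 3: 0.0100% residual error
```

Under the independence assumption the residual error shrinks geometrically, which is exactly the intuition in the comment; the open question is how far real checking passes fall short of independent.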
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxGakg0bp_PHLWf2pJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzwVy3kNaU3enDIoF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz4yaW9wm1aRugHp3F4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzq-GUvVOXJSOE9D9N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwT1hE2fJAk9Tz02MR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwuPdsxskFbZBNDjaF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwrrsvUjRVXdpTJpvR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyhUrrXgDabvgH4BbV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyn7L4i843DDkK3KXh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxyZ5ILTYNNe9ouZ3h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
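Since the raw response is a JSON array keyed by comment ID, the lookup the tool offers can be sketched in a few lines. This is an illustrative sketch, not the tool's actual code; the variable names are invented, and the string below abbreviates the response to two of its entries.

```python
import json

# Raw model output: a JSON array of coded comments (abbreviated here
# to two entries from the response above).
raw_response = '''[
  {"id": "ytc_UgxGakg0bp_PHLWf2pJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz4yaW9wm1aRugHp3F4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the records by comment ID for constant-time lookup.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_Ugz4yaW9wm1aRugHp3F4AaABAg"]
print(record["responsibility"], record["emotion"])  # government fear
```

Building the dictionary once and looking up by ID afterwards is what makes an "inspect any coded comment" view cheap even over a large pool of coded comments.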