Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I was involved with neural nets in 1990 and predicted that AI was going to be th…" (ytc_UgxpOaYd0…)
- "No it's edited not from chatgpt, if it's from chatgpt than we have to stop using…" (ytc_Ugx0Z_3j7…)
- "...so why are you mad at this video? AI isn't stealing, which means no artist's …" (ytr_UgwZVtaCq…)
- "@maoopanyeah ai bros be speaking like a lunatic tyrant “someday i will takeover …" (ytr_Ugy_0kGeg…)
- "the most ironic part is the fact that these "artists" can't even own the "art" t…" (ytc_Ugw3kGLgf…)
- "What I truly hate about AI art is the people who use it to generate something, t…" (ytc_Ugy8Nr1xv…)
- "@m3rcher But still, how can we say it's not conscious? If consciousness is the a…" (ytr_UgxzIPHbI…)
- "Tesla is literally developing a humanoid body to help the AI do the things this …" (ytc_UgzssxmUI…)
Comment
When Insta first made it impossible to opt out of AI scrapping I found an article on how ChatGPT/OpenAI "taught" the AI to detect explicit content. They outsourced to a company in Kenya where the workers made less than a dollar to spend hours reading written acounts of graphic violent and intimate acts being done on people, children, and *animals*. They "offered" counciling to the workers, but did not allow them to do one-on-one meetings, only group sessions, and they were often discouraged from going as it took away from their work time. Apparently this system broke when they wanted to start making the workers tag images of those topics, not just writen discriptions.
youtube · Viral AI Reaction · 2024-10-21T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwUrE1ULy3DoiEc7Ep4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8lh3mBSTV03rzdnt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyEgHFY_43yMMlZMDR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSjgypOs0DSmSRQjl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBeldO9wgpkb4iQY54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3nsTDTrSM-PZyDzZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxIB3zLtbMTiAY9pjd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwS8NJwmsqmTglsKV14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLokyqV9_3Lp6-UZ94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzn8I3LWTfvKM0WW5Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
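Since the raw response is a plain JSON array keyed by comment ID, the "look up by comment ID" feature can be implemented by indexing the parsed records into a dictionary. A minimal sketch in Python, using an abbreviated copy of the response above (the variable names are illustrative, not from the actual dashboard code):

```python
import json

# Raw LLM response: a JSON array of coded comments (abbreviated here).
raw_response = '''
[
  {"id": "ytc_Ugw8lh3mBSTV03rzdnt4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyEgHFY_43yMMlZMDR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
'''

# Index the coded records by comment ID for constant-time lookup.
coded = {record["id"]: record for record in json.loads(raw_response)}

record = coded["ytc_Ugw8lh3mBSTV03rzdnt4AaABAg"]
print(record["policy"])   # liability
print(record["emotion"])  # outrage
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a comment as uncoded rather than crash the inspector.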