Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

ai can be used morally, if it's in house of a company trained by artists that agree to use their art as a learning base, like in cap com for instance, then i feel like there is no contest on whether or not its moral. AI as a TOOL i am down for, not as a REPLACEMENT. Capcom use AI to generate environments to reduce the time they spend brainstorming and increase production on Character art, and finalizing/mixing all the best pieces of the AI generated environmental art together to create new landscapes humans would have a hard time coming up with. Instead of a forest looking like another forest, a forest can be extremely unique looking outside of what we perceive so that it doesnt look off putting or incorrect.

| Field | Value |
|---|---|
| Platform | youtube |
| Video | Viral AI Reaction |
| Posted | 2025-04-13T23:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx-DsZSbQCKe129Hs54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOoZz_VqmA6pIhWz54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz-gomBWit0Xf3Q1DN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqapXh9wNoKaH0fr94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEIUONtSlUyHEoz_J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUjjguPVa7y6a-WxF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzPJc6OsUxPUVkgj_V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXN2WrzAnFYjXPF0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgziHB7KC7zZGsUhC2Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwOIb_6B88Gdeb4LfZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
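Since the raw response is a plain JSON array of coding objects, looking up a coding by comment ID reduces to parsing the array and indexing it on the `id` field. A minimal sketch, assuming the response is available as a string; `load_codings` is a hypothetical helper, and the two entries are copied from the response above:

```python
import json

def load_codings(raw: str) -> dict[str, dict]:
    """Index a raw LLM batch response (a JSON array of coding
    objects) by comment ID for O(1) lookup."""
    return {item["id"]: item for item in json.loads(raw)}

# Two entries copied from the raw response above.
raw = """[
  {"id": "ytc_UgziHB7KC7zZGsUhC2Z4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwOoZz_VqmA6pIhWz54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

codings = load_codings(raw)
coding = codings["ytc_UgziHB7KC7zZGsUhC2Z4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> company approval
```

Keying on `id` is what lets a truncated display ID like `ytc_UgziHB7KC7zZ…` be resolved back to its full coding record once the full ID is known.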