Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick one of the random samples below.

Random samples
- "actually, none of those work to remove the poison. nightshade's rather resistant…" (ytr_UgxpcNBnX…)
- "I bet the customers hate it. People hated it when customer support was outsour…" (rdc_jrpq3h6)
- "You can give me my salary for nothing if you want, I promise you my dignity will…" (ytc_UgzeqmmW5…)
- "the ai in everything i have almost broke my phone and many other things of mine …" (ytc_UgyhwJYcw…)
- "We kinda cant slow down were in a.i war against china whoever wins agi controlls…" (ytc_UgxiJvc0_…)
- "It makes sense that the AI would inherit the terrible biases they were trained o…" (ytc_UgyoOMy9m…)
- "Laughable nonsense. AI is a tool that adapts to the inputs of the user. If you f…" (ytc_Ugya1O5DY…)
- "Yes he should stop the development of a body for the AI like robots. If everythi…" (ytr_Ugx3wd-12…)
Comment
8:59 this is totally opposite to what SHOULD BE done. AI safety is following the same model as the USDA and FDA, where corporations don't have the burden to prove their products are safe. Unlike in the EU where corporations must PROVE that food and drugs will not harm before corporations can sell things to the public.
youtube · AI Jobs · 2025-11-19T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugzv7zClPWdVgLkHMQF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz54ZzV7sGSa6sINeR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyqSdd0BvE_wE2KTMJ4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw8vHA84f4IAzDN2OV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxjnZWsA4qp7l_7u-p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzYWeFpNqCodtx-32d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyjRCbKrNGhgxlKPrp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxeNOL5-cLlCMDKnLl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyO6XWAw6wWuBKL1194AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyQePWrRtz5-I1v1Wd4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]
```
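A raw response in this shape can be parsed into a per-comment lookup table with a short script. This is a minimal sketch: the five field names come from the response itself, but the validation logic and the sample ID `ytc_abc` are illustrative assumptions, not the pipeline's actual schema.

```python
import json

# Keys observed in the raw LLM responses above. This is inferred from the
# sampled output, not an exhaustive or authoritative schema.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing keys {missing}")
        coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_abc","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = parse_raw_response(raw)
print(coded["ytc_abc"]["policy"])  # prints "regulate"
```

Keying by comment ID makes it cheap to join the coded dimensions back onto the original comment metadata (platform, video, timestamp) shown above.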