Raw LLM Responses
Inspect the exact model output for any coded comment.

Look up by comment ID, or pick from the random samples below.
- "All this, just to have driverless taxis? What for? That's someone job, you know?…" (ytr_UgwnJcjO3…)
- "Based on history and the human condition. AI will be used by the elites to cru…" (ytr_UgzMQuvEt…)
- "The time and money you spend on training a new soldier vs the cost of operating …" (ytr_UgwIOVMzI…)
- "This is my advice, ai is fine, but never rely on it. And if humanity gets to a p…" (ytc_UgwVhQqMG…)
- "I think the real danger of AI is the opportunity cost. When we have all the clev…" (ytc_Ugw7NWzCM…)
- "the reasons why facial recognition software has such issues with recognizing bla…" (ytc_UgwhsXQof…)
- "Maybe Ai will take human trust more serious than humans do. That I wouldn't call…" (ytc_Ugzar7tCQ…)
- "Dear Ai prompters: anyone can type. You don’t have a “””creative vision”, you’re…" (ytc_Ugxpt23S0…)
Comment

> Here is how this can be solved. A complete ban on all use of A.I. as all entertainment should remain human made. A complete disenfranchisement of all woke writers as they produce crap content that the customers don't want. As well as healthcare for the actors. Stunt devils don't get impoverished having to pay for injuries studios benefit from, actors get to keep existing in general, talented writers get to do their thing and the woke ones get a reality check, we customers don't get a black transgender marxist sausage down our throats and studios get profits. All is well.

Platform: youtube · Topic: AI Jobs · Posted: 2023-07-15T00:1… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugzp4LfGCxQMT9EIC154AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzirinUflrY-G0z-xx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxhl7NRlaE72yb3v4B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz8whZEvjmIO1U70tF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzHYZ-2_0zK9A8s7EJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZqDMhuVhfT-uEKc54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRCXc5WrjUkgLRXMF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxsK8jZkwvDQxLo0XF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoIyURE8c0wlRVO0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzlye9l-EWIdSs42vN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
```
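A raw response like the one above can be turned into the per-comment coding table with a small parser. This is a minimal sketch, not the tool's actual implementation: the allowed value sets below are inferred from the values visible in this response, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the response shown above
# (assumption -- the full codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval",
                "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    lookup table keyed by comment ID, validating each coded dimension."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record batch for illustration:
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_example"]["policy"])  # ban
```

Keying the table by comment ID is what makes the "look up by comment ID" view above cheap: each ID maps straight to its coded dimensions.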