Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
“Actual AI plagiarism detectors”
VERY BAD IDEA. There is no way to detect an AI…
ytc_UgwI07zAm…
I got emotional watching this. WTAF are we doing here?!😳😖
I’m a sculptor in the…
ytc_UgzG4BTlm…
Honestly, most of the AI defenders don't even realize that we're not attacking A…
ytc_UgzFW3QlH…
So if it uses the internet as it's brain and the internet is the compilation of …
ytc_UgxqAtNuS…
The regulation prohibits the use of AI in critical services that could threaten …
ytr_UgwNJSYFI…
Big shout out to Mr Hinton, you need to get into Politics get some intellingenc…
ytc_UgxyhPl-S…
Since you did computer science, which is a difficult field, do you like working …
rdc_ncfxoq2
I’ve always showed gratitude to ai for helping me solve code errors, but I’ve al…
ytc_UgyiFhzGq…
Comment
It is pure Infringement , same thing Microsoft and google are dealing with when their researchers looted public but copywritten books and subject notes from non public repositories. Pure theft. In their issue with ai, is a differentiating formula that cannot grade said Information . Thereby, being susceptible to Trojan horse injects. As many systems take agrigated material and couple that to inquiries but do not limit incursion replication. Caused by the inquiries. .
Ie. Asking the ai.. how to rob a bank but the inference engine saying the request cannot be addressed due the structure of the engine. However if you rephrase the question to a current crime or historical reference of a crime the ai completes the query. Such fallacies are common in ai.
A recent test i devised caused an ai engine to mistakenly give false information as verified by repetitive salting by bots. It took 35 seconds to completely salt the engine with bad info that the AI said was verified
youtube
AI Responsibility
2026-04-13T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6Au-xRXizZOzdwbZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzXNtwOvEIGSQXVm4N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxXafuPy8CDRoQGf1R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxHJiFfiGDVn-t0y0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw3t_AG0jTE1tkw3Lx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgycjZhJCV37xAJsKT14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzfnDJnk7OUJZkqb-d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhX7Tx9uCkvqnYIH94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxd9pCrgac2bIaHptl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxWrne8Deu71EsfaSx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
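A raw batch response like the one above can be checked before the codes reach the results table. The sketch below is a minimal, hypothetical parser, not the tool's actual pipeline: it assumes only the field names visible in the JSON (`id` plus the four coding dimensions), and `tally` simply counts codes along one dimension.

```python
import json
from collections import Counter

# Dimension names taken from the raw response shown above; any fuller
# codebook of allowed values is not reproduced here.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response into per-comment code records.

    Raises ValueError if a record is missing the comment id or any
    coding dimension, so malformed model output fails loudly instead
    of silently entering the results table.
    """
    records = json.loads(raw)
    for rec in records:
        missing = [k for k in ("id", *DIMENSIONS) if k not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
    return records

def tally(records: list[dict], dimension: str) -> Counter:
    """Count how often each code appears along one dimension."""
    return Counter(rec[dimension] for rec in records)
```

Applied to the batch above, `tally(records, "emotion")` would show `outrage` as the most frequent code, matching a quick read of the raw JSON.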