Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Bro, making things like masala movies, like Bollywood. AI always needs the human…" (ytc_Ugxff0M59…)
- "The Strange Point to Argue is that on the Smaller Scale Ai has existed for Years…" (ytc_UgyAj2SIj…)
- "If a lot people wont be needed in the future would tge whole issue around aborti…" (ytc_UgzTEyk0y…)
- "Another amazing Tyre response. Ai garbage should be banned because it was traine…" (ytc_Ugxe1LKaC…)
- "Nothing to do with purpose, there are not that many people who go to their job b…" (ytc_UgwxPKO32…)
- "What is happening because most of the population is not using AI or knows how to…" (ytc_Ugz5He89-…)
- "False. It pulls from information online. Found alot of white hate and refused to…" (ytr_UgzDpmYgf…)
- "XAi has an unhinged mode that is made to be sarcastic. I dont get the problem." (ytc_UgzuSEX-5…)
Comment
In addition to "gets you better responses," a philosophical point I sometimes like to make, is that if the LLM is good enough to blur your line of perception with regards to "talking to a human" and "talking to a LLM", you're training your brain to be impolite to humans as well if you keep being rude to the LLM.
Granted, I don't think we're anywhere near that yet with current LLMs, they are so obviously nothing like a human. But, I keep reading comments by people who would claim otherwise. That worries me a bit but, either way, doesn't hurt to be polite.
youtube · AI Moral Status · 2025-03-26T14:0… · ♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxJiTGbw0MOWbob3Zt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzM1Yz-I6I8OgnRDMp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgznecFxQIHTssqnSZN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzxbaz6MszYGbE55Ah4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzwyLwzQbqrzo8nhDB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyWdnmSt5_IhnnG9Kh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyRrzejXJiDOAcPtVl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwVLF7gnBAfal0Qump4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwUm82tnZaVq5W5s8F4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwd2_5L8ufmSz0oxf54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
```
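Because the model returns one JSON array per batch, looking up the coding for a single comment ID is a matter of parsing the array and indexing it by `id`. A minimal sketch (the field names follow the JSON above; the two embedded records are copied from the batch, and everything else is illustrative):

```python
import json

# Raw model output: a JSON array, one object per coded comment.
# Two records copied from the batch above; the full response has ten.
raw = """[
  {"id": "ytc_UgxJiTGbw0MOWbob3Zt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzM1Yz-I6I8OgnRDMp4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]"""

# Index the rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgzM1Yz-I6I8OgnRDMp4AaABAg"]
print(row["reasoning"], row["emotion"])  # prints: virtue approval
```

In practice you would also want to validate each field against the allowed category values before indexing, since LLM batch output occasionally drifts from the requested schema.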