Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugz7o3VCg…`: "My friend actually knows i chat with AI and we both do...lets just say...freaky …"
- `ytr_UgyG0htjf…`: "@precooked-bacon I’m from Bulgaria, and you are right about the outsourcing, we…"
- `ytc_Ugy4NNdiP…`: "In some ways technology advanced our work force. Sad to see AI take over many in…"
- `ytc_UgytwJ3TM…`: "Humans have emotional intelligence. This is something that AI would need to deve…"
- `ytc_Ugx17o3_y…`: "We all did this by ourselves by starting to buy online because we wanted things …"
- `ytc_Ugw7Npf_U…`: "The only thing AI is useful for is for generating hentai of really obsecure char…"
- `ytc_UgxL3X0vU…`: "AI has 2 weaknesses: water and power. Cut off its access to these resources and…"
- `ytc_Ugw8XK306…`: "Do you remember that there used to be people whose definition of consciousness t…"
Comment
Being a biomedical scientist myself, I am cautiously optimistic about AI. AI tools like AlphaFold have already vastly accelerated our understanding of protein structures and, as an easy example, will revolutionize medicine. It is absolutely insane how good a tool this can be: something that used to take someone an entire PhD, an AI can now do in a couple of minutes. It already *has* revolutionized my field, and it will only continue to get better at it, allowing us to solve ever more intricate problems and ask ever more in-depth questions.
That said, humans are shit, and AI without strict limiters will reflect that back to us. We've already seen it with Grok going mecha-Hitler and sharing child-SHI. A terrorist group could use an AI that will not just source the materials and build the bomb, but execute the mission almost, if not wholly, in its entirety. Scams are already becoming far more intricate and personalized, and that's only going to grow. Companies are penny-pinching like crazy already, adjusting prices to the maximum they think you will still spend. All of your data is being collected constantly (where you go and when, who you talk to, what you buy, what music you listen to, etc.), and all of it could realistically be interpreted in tandem with everyone else's to give the government a list of "undesirables" if it wanted one. Quality of life may never have been better than it is now, and AI has the potential to make the best parts of humanity shine brighter than ever before; but likewise, the worst parts of humanity will have that much more opportunity to not just fester but full-on blister, or even metastasize.
A full AGI is, as mentioned above, a black box that we can't see into. We imagine it as essentially Skynet or some Lovecraftian monstrosity that will eat us all, but we don't know that for sure. AI doesn't have a psychology, just a methodology. So it's just as likely to end humanity as we know it as to become humanity's savior, solving some of our greatest crises for us and ushering in world peace. It could be a brutal authoritarian god-king, or a humble servant that waits on us endlessly, ready to serve in all capacities at all times. We have no way to know, and it's scary to think about finding out.
Though I will say, what scares me isn't AI as a tool in itself; it's what we do with these tools. Because the only thing we are sure of coming into this era is that we are wholly unprepared. Who knows what comes next, but I do wish the people who have power over these things were realistic about the pros and cons, instead of just... ignoring all of it in the name of profit.
Source: youtube · Posted: 2026-03-05T22:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
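For downstream analysis it can help to hold one coded comment in a typed record. Below is a minimal sketch of such a record in Python, built from the dimension values visible on this page; the class and field names are illustrative assumptions, not the pipeline's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment (hypothetical schema, named after the table above)."""
    comment_id: str      # "ytc_…" for top-level comments, "ytr_…" for replies
    responsibility: str  # e.g. "none", "company", "developer", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "none", "regulate", "ban", "liability"
    emotion: str         # e.g. "approval", "fear", "outrage", "resignation"
    coded_at: datetime   # timestamp of the coding run

# Example: the result shown above. The comment ID is an assumption,
# inferred from the matching last entry in the raw response below.
result = CodingResult(
    comment_id="ytc_UgyGUqMG60qmnjWDAn14AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```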
Raw LLM Response
```json
[
  {"id":"ytc_UgyWBs9lJNL6C6Vva154AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz0QQWCE6OiYqgnOeV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyzVYDTfZec-PruxOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1Nks6jWm9c7Kmk9t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxyrhxh7pF0NtL23554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxpwXYIHaM6FthYfNZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQNzTypd2-eQNoWtV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzl0LCcUKtc0ch39iJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwf0CTOay_to7vvk554AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGUqMG60qmnjWDAn14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
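Since the model returns one JSON array per batch, looking up a single comment's coding means parsing the array and matching on `id`. A minimal sketch under that assumption; the helper name is hypothetical, not the app's actual code.

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return one comment's coding from a raw batch response, or None.

    Assumes the model emitted a JSON array of objects keyed by "id",
    as in the response above. Hypothetical helper, not the app's code.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    return next((r for r in records if r.get("id") == comment_id), None)

# e.g. lookup_coding(raw, "ytc_Ugz0QQWCE6OiYqgnOeV4AaABAg")
# -> {"id": "ytc_Ugz0QQWCE6OiYqgnOeV4AaABAg", "responsibility": "none",
#     "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
```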