# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples
- @sweatygoblin2335 Ye, I agree with your view. It *should* be used as a tool… (`ytr_UgweEJTe_…`)
- That “if you’re a good person, you have nothing to worry about” line is one of t… (`ytr_Ugx-tmtdO…`)
- I like Ai art and Ai music but anyone that clams that they were the ones that ma… (`ytc_Ugx4Vv9oF…`)
- They better not call me for jury cus ima vote in favor of nyt jsut cus of my dis… (`ytc_UgySEI9hF…`)
- This is a much better topic to cover. Tesla full self-driving will flash a mes… (`ytr_Ugwb5k1eM…`)
- Create a petition we sign it for you you make it viral ,we cancel all these comp… (`ytc_UgxQnFUaE…`)
- i under stand AI can be used in a bad way so can a car. but, that is not what th… (`ytc_UgzQobz4y…`)
- After saying ‘nah, ai won’t replace humans’ Gary finally hopped the wagon Too la… (`ytc_UgxFK_4TL…`)
## Comment
To be honest, there is almost something comforting about the 3rd ending, an ending where an AI is willing to provide humans a sense of purpose and drive, while still being fundamentally greater than human in a universe where dangers abound. Yes humans essentially become pets, but it also means that our responsibility is reduced. It is like being a child again and being safeguarded by an 'adult'. Only question is can you trust that 'adult', but its hard to think you can't if an AI is willing to go so far as to fake inefficiency to give humans a sense of purpose and drive.
That means that on some level, it feels like it needs humanity, else why bother.
But tbh I fully expect improved AI will lead to a better understanding of the human brain and then direct augmentation of our own calculating capacity with 'hardware' improvements, and basically reducing the distance between a brain-machine interface. We won't type at the speed of hand or speech, but at the speed of thought, and thought processing itself will again be enhanced by augmentation.
Source: youtube · 2026-02-20T19:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgwzVg-cotYsJ5O4gSZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvZu8SOtXV6BycygN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzmKY1eCZYhoU0-aFl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxZa6aRoacGr36A41N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAY3U96noDEs4TQI94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2iLK9UARgeePQmOJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy43NY1b6PDOnYW3zF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgymJ-fCq08jTuNZtv54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGlOLfZOgo-H5vn5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzbGjC9D8aJTzqUo-V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
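Each raw response is a JSON array with one record per comment ID and four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked in Python; the allowed label sets below are only those visible in the responses on this page, so the actual codebook may well contain additional values:

```python
import json

# Label sets inferred from the responses shown above (assumption:
# the real codebook may define more labels per dimension).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and validate every coded record."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} label {rec.get(dim)!r}")
    return records

# Usage with one record taken from the response above.
sample = (
    '[{"id":"ytc_UgyAY3U96noDEs4TQI94AaABAg","responsibility":"ai_itself",'
    '"reasoning":"virtue","policy":"none","emotion":"approval"}]'
)
coded = parse_coding_response(sample)
```

Validating against a fixed label set catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently entering the coded dataset.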