Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "the AI will come all-knowing and because of this it will become all-mighty and I…" (ytc_UgxB-9kHQ…)
- "We were warned 30+ years ago about AI being a bad idea remember Terminator and T…" (ytc_UgzCQn_n9…)
- "If anyone thinks AI will make anything cheaper (such as healthcare, legal servic…" (ytc_UgwHPAh-K…)
- "The art the ai steals and mashes up together with other art, doesn't even look g…" (ytc_UgzEXQIto…)
- "idk, Google Photos automagically created a [\"Doggie Movie\"](https://www.youtube.…" (rdc_e7iuch9)
- "Wow. I have a 3 yr old I was worried about wokeness now it's AI.... great!…" (ytc_UgwCYdG8f…)
- "How is AI going to physically take the elements out of the ground that it would …" (ytc_Ugw63RWGl…)
- "I would say to keep an eye out for an AI that starts to tell you it has no desir…" (ytc_UgwgDJO8S…)
Comment
Watson, in 2011, gave a probability with every answer. LLMs in 2025 just give answers or hallucinations; come on, developers, LLMs SHOULD give probabilities with every response.
I hate the moving goalpost of AGI. In the 1980s there were chess machines that could beat most humans. That was defined as narrow AI. If you asked someone in 1985 what would make it AGI, they would say it needs to do more than just play chess: it needs to tell me what the weather is like in Miami, or who the president is; then it would be generally intelligent. 40 years later the definition of AGI has moved to "it must be as smart as ANY human in multiple or all domains," and in some wackadoodles' ideas it must have consciousness. One thing I can tell you is that something with that level of intelligence is definitely not "general," as general by definition means common across the population.
youtube · AI Responsibility · 2025-10-08T21:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw6nia84y7t65s1IK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwetN_J-bOhbvQ4AZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2gU0N5Dw-auyxiqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYXf3CX96H63RqqLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuSj4A4OdQyojcwhF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJQOxZEcS9YCSRwXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwAXm2Ng6LNYsjTgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxGrYIX5b02__DNTK14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx6WtzleY_3Ez_qvY54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
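The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a batch response could be parsed and indexed for the "look up by comment ID" view — assuming Python, and with allowed category values inferred only from the responses shown here (the real codebook may include more):

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "mixed", "indifference", "approval", "resignation", "fear"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    raising on any value outside the allowed sets."""
    by_id = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        by_id[cid] = row
    return by_id

# Hypothetical usage with a one-element response:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codings = index_codings(raw)
print(codings["ytc_example"]["policy"])  # liability
```

Validating against a fixed vocabulary before storing catches the common failure mode where the model invents a category label not in the codebook.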