Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_Ugx1eep82…: 26:50 What about "animal rights"? Animals are not persons, but many people recog…
- ytr_UgySjZJ4_…: @nagymancs yeah that basically a good summary of the alingment problem. If we co…
- ytc_UgxInHOFO…: Book launch clearly.... kind of obvious and also c**tish really to be blowing th…
- ytc_UgzUqbBJA…: How about we restrain them based on moral compass in a way, that worse actions i…
- ytc_UgziGps0f…: If AI does end up helping us more than killing us lol then what will the corrupt…
- ytc_UgykOz1z3…: If AI truly had an consciousness, it would realize that everything, including it…
- rdc_magy26j: Might? Teacher here. It’s already happening. AI is open on laptops 24/7 and stud…
- ytc_UgyAkww0C…: And for those that don’t believe what I’m saying all you have to do is ask ChatG…
Comment
I studied machine learning in Grad school and was never the least bit impressed with LLMs. Until they can have trillions of neural nodes that are able to spontaneously reorganize themselves into new models to solve new problems, they are not AI. It is all either glorified autocorrect or the most powerful plagiarism engine of all time. Intelligence is so much more than language.
youtube · AI Responsibility · 2025-10-01T20:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzgClyWoxYLRYW34GR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTDruNWieRnapIv3F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9cMW7c6tG-Z5nPe94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOHi105XGUaHJRDgx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxBjgjT-kTgpbjg-gx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzrrq6x50GWp_wRfMJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFFsHGGtHfEfRUAAp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzxbgZLdwuu3-ZZ3sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyLM6DBWbR41FAbvd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyX6cegPe1nsgU12Fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
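The raw response is a JSON array, one object per coded comment, with four coding dimensions plus the comment ID. A minimal sketch of parsing and validating such a response, assuming Python 3.9+; the allowed value sets below are inferred only from the sample shown here, and the real codebook may define additional categories:

```python
import json

# Dimension values observed in the sample response above; the full codebook
# may include more categories (assumption for illustration).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "resignation", "outrage", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows whose values are recognized."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every coding dimension must be present and hold a known value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one well-formed row passes through unchanged.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
```

Dropping (rather than repairing) rows with unknown values keeps the pipeline conservative: a malformed or hallucinated category never reaches the coded dataset.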