Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click an entry to inspect):

- "AI is not real ,the public buy into any shit it's all programming and apps the i…" (ytc_UgwjrdE-T…)
- "If you let AI do your homework, you won’t learn anything. If you use AI as a re…" (ytc_Ugw5sA_fF…)
- "AI exists. Artists: nooo this is not real art don't take our jobs you don't want…" (ytc_UgzshcW2T…)
- "3:00 That Peter Thiel fumble is going to haunt him forever. On another point, i…" (ytc_UgyQeBNUq…)
- "The problem I see with AI boom or rather the tech boom in general is that the ra…" (ytc_Ugy87oekT…)
- ""AI will kill us all...now watch this vid explaining how, that we made with AI"…" (ytc_Ugx0Kmm6Z…)
- "Exactly, AI is good and entertaining as long as they give it a label in it or in…" (ytr_Ugxi0oKm4…)
- "Funny ChatGPT said something different to me, "In Christianity, Jesus is believe…" (ytc_Ugw3zdGD-…)
Comment
I have been using LLMs for over 2 years now. The dissonance between what I see on the news about AGI and these models being a "threat" and the disappointing reality of their limits/ability to give meaningless answers in context is appalling. They are great at summarization and spitting code snippets/fixing errors in a chunk of code but on a large code base or when put into the context of a larger system they spit out non sense and it takes a huge effort to get good results. I recommend looking at the latest OECD report on AI capabilities where 50 experts scaled AI against a lot of domains as compared to humans. AI is at best at a 3/5 and in most cases at 2/5.
youtube · AI Moral Status · 2025-06-06T07:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwlulVcGxox5iXILjB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyuVTxGSTG-BIU_V6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxxGllVGtpOGIgIjS94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGI508UsCf32ClWHF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxN6Rf59yzQVy07nIR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwWuWd0gKObh-EYpLd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw1VZ2RdMe2-m-D4WN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOkZhAglW0w0co3Dt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxK2lOp76Evfk3w3tN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLIuO4FBTw8L1CTud4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
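The coding-result table above corresponds to one row of this JSON array (the entry for `ytc_UgyuVTxGSTG-BIU_V6l4AaABAg` carries the same values: responsibility `none`, consequentialist reasoning, policy `none`, emotion `resignation`). A minimal sketch of how such a lookup-by-ID can work, assuming the raw response parses as a valid JSON array; the `lookup` helper is illustrative, not the tool's actual code:

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_UgwlulVcGxox5iXILjB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyuVTxGSTG-BIU_V6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

def lookup(raw: str, comment_id: str):
    """Parse the raw LLM JSON array and return the row for comment_id, or None."""
    rows = json.loads(raw)
    return next((r for r in rows if r["id"] == comment_id), None)

row = lookup(raw_response, "ytc_UgyuVTxGSTG-BIU_V6l4AaABAg")
print(row["emotion"])  # resignation
```

In practice the raw model output may fail to parse (truncated output, trailing commentary), so a production version would wrap `json.loads` in error handling before indexing.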