Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The more I use chatbots like GPT, the more I feel we could do nearly the same with a (very) big Excel file, with lots of boolean/conditional formulas. I know LLMs use a quite different, non-deterministic technology, trained to target a particular type of answers, pumping data from billions of previous texts, with changing/unstable results. But anyway, behind it, there's engineering, not an autonomous intelligence.
youtube
AI Responsibility
2025-10-29T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzH89X6bUBv4wCZTgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqGjPNwJ6QA0Fz-4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyxQO09TTCNTIZLiEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy4NUAksRfnApvRwk14AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgypZknkEThfR3Qywtx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOHQ0pyTPcxci4HiF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyB20yVDFKkceDmmgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzDRtwqT8lqpKvDHax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwKB3YK0w4etax9s254AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzprmYLK9plq4KKxah4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
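The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed to recover the coding for a single comment ID — assuming the model returns valid JSON in exactly this shape; `lookup_coding` and the single-entry `raw_response` string are illustrative, not part of the tool:

```python
import json
from typing import Optional

# Hypothetical batch response, shaped like the first entry shown above.
raw_response = """
[
  {"id": "ytc_UgzH89X6bUBv4wCZTgF4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM batch response and return the row for one comment ID."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

row = lookup_coding(raw_response, "ytc_UgzH89X6bUBv4wCZTgF4AaABAg")
print(row["emotion"])  # indifference
```

In practice the parse would need a fallback (retry or mark the batch uncodable) for responses that are not valid JSON.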