Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "As a Software Engineer, I will tell you this: AI doesn't think. It's just a bunc…" (ytc_UgyzAkfq4…)
- "Personal opinion? Since a LARGE cross section of the tech world (silicon valley…" (ytc_UgymPTTA-…)
- "So an artist using ChatGPT to create code for a game is all fine and normal, but…" (ytc_Ugxm-09Jd…)
- "Why was it released publicly? Without it people would get the impression that a …" (ytr_UgzP2TOIE…)
- "I think I remember when these featured formidable people. Am I imagining this? D…" (ytc_UgyUca_Dd…)
- "Your suggestion at the end that people need to watch more science and tech journ…" (ytc_UgzpBXRmh…)
- "To me it seems like the AI bot is like a speak and spell that nephilim can use w…" (ytc_UgxDe6OkL…)
- "I think we should keep that AI, seems like its doing a good work 😂🤣…" (ytc_Ugz-_1vZN…)
Comment

> Fears grow when systems left untended. We can reduce this potential by cultivating AI ethics in education from the first step of beginners to the highest levels of expertise. In this way, we irrigate the future with responsibility, so that technology learns not only to calculate, but to care.

youtube · AI Responsibility · 2026-02-22T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxzsr4GVmTucVz6rbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxK78oatKYQLDijCMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1jMMzXw8Xx8kRoiJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwLjUyYK-hKXnJlIbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzlx__mzd5bTF1rH_B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyFdGChB0p6hnkswV94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxBWEbP2c1fH90efVB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw7i6dBWBYLJj4bvUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxxp2-6fYiEnX-jQvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz71qFAXQtTqxvTc2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
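Because the model returns one JSON array per batch, a small validator can catch malformed responses before their codes are stored. The sketch below is an assumption-laden illustration: the allowed value sets are inferred only from the codes visible on this page, and the real codebook may define more.

```python
import json

# Allowed codes per dimension, inferred from the examples on this page
# (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist",
                  "contractualist", "virtue"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response; empty if clean."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim}={value!r}")
    return problems
```

Running this over the array above would return an empty list; a response where the model invents a code outside the codebook, or answers in prose instead of JSON, would surface as a non-empty problem list instead of silently corrupting the coded dataset.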