Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
yeah nightshade doesn't work especially with chat gbt's new model that came out …
ytc_Ugytm0u4I…
The biggest warning is that ( AL) is going to Diddy us all over and over…
ytc_UgwdqLhmO…
I once heard something that "ai art" should refer to getting help from ai (which…
ytc_UgxlVJxOH…
@Yku30 I get you but image yourself learning a skills that you are developing fo…
ytr_UgyGJbbs9…
I know we all think we're beautiful unique snowflakes but there are only so many…
ytc_UgyzVtbnO…
Once AI becomes A adult then an old men This world will be A Robot world… humans…
ytc_Ugz_oiTVb…
I agree.
It will also increase the number of hackers and scammers online.
If p…
ytc_UgxIGzwWm…
@talk2atech I don't care to argue this subject as we're all in the dark on it, …
ytr_UgzBMXuzt…
Comment
Here’s a professional, engagement-oriented comment you can post under that video — clear, thoughtful, and designed to invite replies (no quotes, no named people):
This really brings up an important point: impressive technology can change how we make decisions, not just what we can do. When outputs are polished and fast, there’s a real risk of slipping into automatic acceptance instead of evaluating the reasoning and limits behind them.
I’m curious how others in this space handle that balance. What processes or habits help you ensure human judgment and accountability stay central when you use tools like this?
youtube
AI Responsibility
2026-01-28T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
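A coded record like the one above can be sanity-checked against the category sets that actually appear in this page's raw responses. This is a minimal sketch, not an authoritative codebook: the allowed values below are only the categories observed in the sample output shown here, and the `validate` helper is hypothetical.

```python
# Hypothetical validator for one coded record. ALLOWED holds only the
# category values observed in this sample's raw LLM responses; a real
# codebook may define more.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is missing or outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above passes cleanly.
coded = {"responsibility": "user", "reasoning": "deontological",
         "policy": "unclear", "emotion": "indifference"}
print(validate(coded))  # → []
```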
Raw LLM Response
[
{"id":"ytc_UgzGQ13jX42lvN-w03l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwmAbOIWR5XCG_gfe94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQVp4gBh6Rv3BuSll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwcwwzx8VUjHBLHkkF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzjyTmld00Mo2aaGS14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBLNm5R0erZ8QgdxF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxr3N8ihFw0V-hRUQJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyYGNC5IB4Og_BgdlV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyyjFxw1E6GiOwNZ9V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwWHqXLkhxNPIPwd194AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
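The lookup-by-comment-ID feature described above reduces to indexing this JSON array by its `id` field. Here is a minimal sketch, assuming the raw response is a well-formed JSON array like the one shown; the two records are an excerpt copied from that output.

```python
import json

# Parse a raw LLM response (a JSON array of coded comments) and index it
# by comment ID. `raw` is a two-record excerpt of the response above.
raw = """[
  {"id": "ytc_UgzGQ13jX42lvN-w03l4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxBLNm5R0erZ8QgdxF4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]"""

by_id = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coded dimensions for one comment by its ID.
rec = by_id["ytc_UgxBLNm5R0erZ8QgdxF4AaABAg"]
print(rec["emotion"])  # → indifference
```

If the model's response can be malformed, wrapping `json.loads` in a `try`/`except json.JSONDecodeError` and logging the raw string for inspection is a sensible extension.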