Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Everything she says is true, but people should just attack people who use ai art…" (ytc_Ugwm0gBYo…)
- "how is redrawing the ai piece mocking it? if i generated an ai image that made p…" (ytc_Ugw2VZJ28…)
- "am I the only one hearing ChatGPT *sound* more agitated and anxious the more que…" (ytc_UgzlHsZjJ…)
- "Take a Stand and take yall land back and put all of us out! TF! Why disrespect N…" (ytc_UgziGGkYW…)
- "@attackchopper2582 Hi, so you think web developers have a chanc…" (ytr_UgztFTp1O…)
- "It’s fascinating to hear Sam talk about AI becoming an “extension of ourselves.”…" (ytc_UgxcQKwnM…)
- "@fredumstadt593 i think you're right that we tend to anthropomorphize AI, which …" (ytr_UgyVLgHho…)
- "Chat gpt's image generation saved me from a HUGE pain in the ass when I needed a…" (ytc_UgxHO6j9w…)
Comment
You said chatbots lie in category 2 - limited risk. But healthcare/medical decisions lie in category 3 - high risk. So, if there is a chatbot or voice assistant (like Alexa) which provides medical advice, will it be in limited risk category 2 or high risk category 3?
youtube · AI Governance · 2024-04-15T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
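The dimension values above come from a closed codebook. A minimal sketch of checking a coded record against the allowed value sets, where the sets below are inferred from the values that actually appear in this dump (the real codebook may define more categories):

```python
# Allowed values per dimension, inferred from the coded records shown in
# this dump; assumption: the full codebook may include further categories.
CODEBOOK = {
    "responsibility": {"unclear", "government", "company", "developer",
                       "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "none", "industry_self", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "approval"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the allowed set."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown above passes the check:
coded = {"responsibility": "unclear", "reasoning": "deontological",
         "policy": "unclear", "emotion": "indifference"}
print(invalid_fields(coded))  # → []
```

A check like this catches the common failure mode where the model invents an off-codebook label instead of falling back to "unclear".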
Raw LLM Response
```json
[
{"id":"ytc_UgyXqTN-2aZ5Xn0I6JN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzyqHWkk1pH4lG2_8R4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyr9leIp_X8P2LHzCh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzPWCu1w57TDg9GjM54AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzZnTG6AacyXlph6954AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8wQpvqO7TO5FKqUZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyZ-AXtX0Pkxtzav_p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwdY0ZJcpHxMpj10VZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyDoFPRPQFanMwQG5h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzvJRBdMAj2H8sL1IJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
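The "look up by comment ID" view can be reproduced from a stored raw response: parse the JSON array and index it by `id`. A minimal sketch, using two records copied from the raw response above (a real response would contain the full batch):

```python
import json

# Two records copied from the raw LLM response above; assumption: a real
# stored response holds the whole coded batch as one JSON array.
raw_response = '''[
{"id":"ytc_UgyXqTN-2aZ5Xn0I6JN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8wQpvqO7TO5FKqUZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Index the batch by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = by_id["ytc_Ugx8wQpvqO7TO5FKqUZ4AaABAg"]
print(rec["emotion"])  # → outrage
```

Indexing once up front beats scanning the array per lookup when many coded comments are inspected against the same response.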