Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It can be helpful, but not in the "do my work for me" way, but in the "automatin…" (rdc_mle5pcu)
- "I think all guys are waiting for the Artificial Intelligence that really matters…" (ytc_Ugy0jIVj4…)
- "Companies are idiots to replace human. Robot or AI won't going to buy Products a…" (ytc_UgxdZqU0G…)
- "the music academy has failed its students to use ai to write music, all that hel…" (ytc_UgxE6M60A…)
- "36:31 AGI doesn't need to have "agency" in the same sense that humans do in orde…" (ytc_UgzFczuHq…)
- "So that's how we will be killed by ChatGPT mother of AI / In year: 2089…" (ytc_UgzC_YhHk…)
- "What are we using as a model for A.I. we wouldn't create it in our own image wou…" (ytc_Ugw4RZK-W…)
- "I think there comes a point when rules, prompts, guard rails, and fixing AI code…" (ytc_UgzoAKlqF…)
Comment
This is a dangerous ai that's like saying this hammer is a dangerous hammer look at what this guy is making the hammer say and do if an ai kills someone then someone has the remote control that is the the murder I can see this becoming a problem with people understanding that if an ai does anything there still is a actual living breathing Person to be held accountable and it's not the hammer
youtube · AI Governance · 2024-02-20T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxtVrnZ6k41AyRYHph4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyS29x7rLEVq2RYtDR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugw_XKmw33426dfm6pZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw_TLrCP46OZCf14n14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzYWcmFwUgyUYDloYR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxupasrPTYgl4FBbZN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwZ9z6521AlP7rAfgR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
 {"id":"ytc_Ugxhs4Y1L_37ujTQu9F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugw0lxjjilUVlkkV1bN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugymu7_E7j-SHf20yOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
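A raw response in this shape can be parsed and validated before the codes land in the table above. This is a minimal sketch: the field names match the JSON shown, but the sets of allowed values are assumptions inferred from the visible examples, not the full codebook.

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# ASSUMPTION: the real codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"user", "company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "disapproval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response, for illustration only.
raw = ('[{"id":"ytc_example","responsibility":"user","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
records = parse_coding_response(raw)
```

Validating against a closed value set catches the common failure mode where the model invents a category outside the codebook, rather than silently storing it.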