Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Neil deGrasse Tyson sounded naive here. How can you create a job and sell yourse…" (ytc_UgwIs7jZO…)
- "Your follow me drone is already an autonomous weapon. A little software tweak an…" (ytc_UgzFZB2Za…)
- "Meanwhile he continues to fill the sky with strings of overlord tech that AI wil…" (ytc_Ugxf9qH7h…)
- "Why not create one of these AI Worker Manager whose sole goal is to try and solv…" (ytc_Ugz0Hc6aA…)
- "Absolutely loved the predictions from Sabine, Although they are predictions and …" (ytc_UgzuexsL6…)
- "ai is a prediction algorithm... Some races have a lower on average education or …" (ytc_UgzEI751l…)
- "I loved this. Those people going on about how they're going to take you down wit…" (ytc_UgxzuZ2Ms…)
- "i would set up ai at home and home school my kid instead cause schools have fai…" (ytc_UgxwajkCR…)
Comment
You know what? It’s terrifying. For creators, experts and scholars in the field to say that ai has a mind of its own and we need to collaboratively find a way to prevent it is terrifying. What do you mean prevent? If the creator made it then why all the damage potential? Shouldn’t there be parameters it should not cross?
youtube · AI Responsibility · 2025-07-27T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7ufMHhDiLv6rtUxJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyogR4X-AE2C0zV2OV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwWwnu1Z2gHb8M06Jl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwYIJYwsiTI7PoZNcF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxjpPgtnYX39-oZ3X14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzecMHGJmfm5Z3BQjl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzvlQnw8jf5_7Q-f14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugysjz5QJXWYa02cbLB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxB6IiYGI-ikm2CV2J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzN_9iUc9khM72byV94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
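For anyone post-processing these exports, here is a minimal sketch of parsing a raw response like the one above and validating each coding against the four dimensions shown in the table. The allowed values are inferred from the samples on this page, not from an official codebook, so the `SCHEMA` dict is an assumption you should adjust to your own coding scheme.

```python
import json

# Allowed values per dimension, inferred from the coded samples on this page.
# Assumption: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with one entry from the response above:
raw = ('[{"id":"ytc_Ugw7ufMHhDiLv6rtUxJ4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
coded = validate_response(raw)
print(coded["ytc_Ugw7ufMHhDiLv6rtUxJ4AaABAg"]["emotion"])  # outrage
```

Indexing by comment ID also supports the "Look up by comment ID" workflow: a failed lookup (missing key) distinguishes an uncoded comment from one the model coded with an out-of-schema value, which raises during validation instead.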