Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Just sayin', it would be unlikely that humans will be destroyed by A.I IF we all don't have differences between eachother so while developing these machines, we wouldnt make ANY mistake that could possibly lead to humanity-vaporization. This is a crucual point cause if we work it right now, we might be saving the world
youtube · AI Responsibility · 2024-08-23T16:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
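Each coded row can be checked against the code book before it is stored. A minimal validation sketch, assuming a code book inferred only from the values visible on this page (the real scheme may contain more categories):

```python
# Allowed values per dimension, inferred from the coded rows shown here.
# Assumption: this is not the tool's actual code book, only what is visible.
CODE_BOOK = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_row(row: dict) -> list[str]:
    """Return the dimensions whose value falls outside the code book."""
    return [dim for dim, allowed in CODE_BOOK.items()
            if row.get(dim) not in allowed]

# The row coded above passes cleanly:
row = {"responsibility": "distributed", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "approval"}
print(validate_row(row))  # → []
```

A row with a misspelled or missing value would come back with the offending dimension names, which makes bad model output easy to flag before it reaches the results table.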
Raw LLM Response
```json
[
  {"id":"ytc_UgzB2pwt7p2e5-qWHxl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyskzGgEFCl2WAlPzJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz2B6iHgNeg_pQzALR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvVT8GkVfzynkx7VN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwYCIs3t8N9sEVqH854AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyXQPD46beU3gMmn3Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxeJ4xeWilpWBlwBqV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzMIAwrqv1JYVtZVVB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxKvYZbOby7VEcjEaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwke_Q4r2rzWMr2fV14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```