Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by its comment ID.
Random samples:

- `ytr_Ugx1thOxA…`: "We appreciate your engagement with the video! If you have any questions related …"
- `ytc_UgzeweMva…`: "Lol check back in 10 years with everyone saying AI is snake oil. Humanity genuin…"
- `ytc_Ugxk7hrdb…`: "So says the man that was pivotal in many of the main developments in AI tech as …"
- `ytr_UgyWCWOZl…`: "ChatGPT is a chatbot, not a person. The people responsible are the ones at the c…"
- `ytc_UgzdscQgM…`: "Makes me wonder if any cops have pit-maneuvered a Waymo yet because it was doing…"
- `ytc_Ugxpnw4qS…`: "11:28 This scared me because the robot basically stated that AI will overtake h…"
- `ytc_Ugy24IcvT…`: "Open AI intended to keep it not-for-profit, and then Microsoft let them know abo…"
- `ytc_UgyUKDQK9…`: "Clever AI Humanizer is wild 😮 Makes AI sound totally natural — highly recommend …"
Comment (youtube, 2025-05-27T04:5…):

> AI is dangerous No matter how amazing it would be. It's a technology. Engineers and Everyone would be mindful when using it. It was just a tool and doesn't have a heart or emotions. Imagine "electricity " would start to talk it's the way AI talks. I got some experience. Thats why i says it. It can mislead someone.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgyzYj2f0e-0qWVpCbx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7pHw0uAT39xbcufR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmsYmUDj9gnvqNIfZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyzI7kL96fps6iCezl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxdwZw9Tbqj_AvaRUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxGvxed4gtSrJSIgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlWuDM8DvshWaPdKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw62wCBPVaPK70L5eJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwenGEd2TaBghN_4uN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyKbA9yPBvBosUz9AJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
```
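The raw response is a JSON array of per-comment codings, each keyed by a comment ID. A minimal sketch of how such a response could be parsed and looked up by ID (the field names come from the response above; the variable names and the two-entry sample are illustrative):

```python
import json

# Two entries copied from the raw LLM response above.
raw = (
    '[{"id":"ytc_UgyzI7kL96fps6iCezl4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"liability","emotion":"fear"},'
    '{"id":"ytc_UgxdwZw9Tbqj_AvaRUZ4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coding by its ID.
coding = codings["ytc_UgyzI7kL96fps6iCezl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

This is how a "look up by comment ID" view can resolve a comment to the exact dimensions (responsibility, reasoning, policy, emotion) the model assigned it.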