Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click any entry to inspect:

- `ytc_UgzNYX_bf…` — "If it is coming from existing biases then you can show that with the data. The d…"
- `ytr_UgwCtNR7C…` — "@AnDr3w066 I actually made a valid point, you're just purposely ignoring it to c…"
- `ytc_UgywpSZxk…` — "Already AI making me completely isolated and free from the world. ( it feel goo…"
- `ytr_UgzqWtLV1…` — "@blueclocks7610 It's 2025 most kids know about AI A picture of me riding a drag…"
- `ytc_UgzT7iSZz…` — "Notice how many times she inferred that it's about her programming. Humans have…"
- `ytc_UgzRAvstu…` — "AI and Robotics are going to take over the majority of roles, UBI will be needed…"
- `ytr_UgyQMCBDT…` — "Thank you for your feedback! We strive to provide high-quality content on our ch…"
- `ytr_Ugz5qXomj…` — "We appreciate your observation! Indeed, AI technology is advancing rapidly, aimi…"
Comment
While companies may have good intentions when developing AI, they are still driven by profit motives and may prioritize their own interests over those of society. Additionally, without clear regulations and oversight from the government, companies may not always prioritize safety, privacy, and ethical concerns when developing AI.
Furthermore, AI is rapidly advancing and becoming more complex, making it difficult for companies to anticipate and mitigate potential risks and unintended consequences. Therefore, it is important for the government to establish regulations and standards that ensure the safe and responsible development, deployment, and use of AI.
Source: youtube · Posted: 2023-04-10T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxngsk4XS5IokdOwZp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyxVXWv0eR7lmu_OaN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz0Sb86csC2A7rb-IV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIl5EjhiiEjpDQHet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxE9CNKfKGu4Mmguf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgymLIx7eVakjOabKkB4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyUhHgGRPJuD0dOFMF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgysX6Tl_iiMLchLPjZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyJ_oN-L_Imo1Z2vkJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy_yAmcX6VrwpYObr54AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"regulate","emotion":"indifference"}
]
```
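A minimal sketch of how a raw response like the one above can be parsed and checked before the codes are stored. The allowed values per dimension are inferred from the samples on this page and are an assumption — the full codebook may define additional labels:

```python
import json

# Allowed values per coding dimension, inferred from the sample output above
# (assumption: the real codebook may include more labels).
SCHEMA = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "resignation", "mixed", "unclear"},
}

def validate_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Hypothetical one-row response in the same shape as the output above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
print(validate_codes(raw)["ytc_example"]["policy"])  # regulate
```

Rejecting out-of-schema values early keeps malformed or hallucinated labels out of the coded dataset instead of surfacing later as unexplained categories.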