Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Someday soon AI robotic ICE agents will patrol the streets asking for identifica…" (ytc_Ugzp0hmqR…)
- "7:26 You are wrong, if the students are using gpt to write essays for them, essa…" (ytc_UgxpY73Tn…)
- "There's a reason they chose Quake as a demo, and not, say, Skyrim. Or even Morro…" (ytc_UgwBYgBhF…)
- "Universal AI! Great idea! Everyone has access to AI to create their own best lif…" (ytc_UgwQHf7_L…)
- "This is just one piece of the puzzle of automation America is fixing to go throu…" (ytc_Ugwqy8x8L…)
- "God, im glad that i found so many artists i trust to be real artists before ai w…" (ytc_UgzzZQ4YS…)
- "Well, twitter artists need to quit over valuing their skill. Im happy to support…" (ytc_Ugx7A22y_…)
- "What about people who are nuts, is it possible that these partner AI can convinc…" (ytc_Ugygjtk-T…)
Comment
AI, without a doubt will change the future. When it will happen? It has already started. How it will happen? Look around, AI implementation is being embedded into simple task from personal use of a AI assistant on your phone to making software with a few prompts and more. The big question is WHAT will happen to humanity? The evolution of technology had further mankind's pursuits and endeavors for longevity, quality of life, and happiness. But it also further people with power to gain advantages and expand their civilization through warfare and consolidation of wealth. If AI can be controlled, those that hold the leash will they use their power to further humanities goals or their selfish desires for more control, power and wealth? If AI cannot be controlled, will it evolve to become our caretaker or see us as an detriment to it's own existence and turn on us and either control or destroy us? The future isn't written, but what mankind does in the present will be recorded in history.
Source: youtube · 2025-10-20T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyyduUax8aXAaxWBCZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxkegbUjbDGXRs1R7l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxZwugD7yGkSe4x25B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgySTUwVMxYZqUVI6s14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyarYjr72BHEDgJpNx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxKlsyJZTcEhJRysnx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwkiZi6ACF1Ypjv2_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3h8OillGziL183h14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySBj6_hE2RJPw6AZ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxw4lP_wFRhxbG6B-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
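A batch response like the one above can be parsed and indexed by comment ID with a short helper. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed dimension values are inferred only from the entries shown on this page and are assumptions, not the full codebook.

```python
import json

# Dimension vocabularies observed in the raw response above.
# ASSUMPTION: these sets are inferred from the sample records only
# and may be incomplete relative to the real coding scheme.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"approval", "outrage", "indifference", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    dropping any record with a missing or unknown dimension value."""
    coded = {}
    for record in json.loads(raw):
        if all(record.get(dim) in values for dim, values in ALLOWED.items()):
            coded[record["id"]] = record
    return coded

# Usage with a hypothetical one-record response:
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
coded = parse_batch(raw)
print(coded["ytc_example"]["emotion"])  # approval
```

Validating against a fixed vocabulary before indexing is useful here because an LLM coder can occasionally emit off-schema labels; silently dropping (or logging) such records keeps the lookup table clean.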