Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
transformer models don't do anything unless you prompt them. You have to put an input in them so they can transform it into an output. You have to prompt them. There is no AI that is actually an agent. The true agency is the person prompting the model and the models try to follow the instructions in the prompt. This is fundamental and absolute with our current technology. That doesn't means someone won't prompt a model to do something destructive no AI model does anything unless it's prompted and you send the prompt through the neural network. Even if you set a model up on a loop to recursively prompt some type of sensor data (like a self driving car), you still have to prompt the thing to go somewhere in the first place.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-09-04T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw8mfHj-axWE1wdbFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyj2E-nvG0Ss_XIip14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgxmNwO3VZ_IDZk1LSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz4qLMkz2g0I4TSrnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYrb1RSGE4vIQrcit4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxa7rgw78mvS0mBtrl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmSDdtYKEXTRbuEmt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxt7gnhGAsWvRWzLwt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrTJC8jIaTzD3Va_B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz8uk8tM0NnteZMegN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
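The raw LLM response is a JSON array of per-comment codings, so looking up the coding for a given comment ID amounts to parsing the array and indexing it by `id`. A minimal sketch of that lookup, using an abridged copy of the response above (the `raw_response` string and `index_codings` helper are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings (abridged here;
# the full response contains one object per comment in the batch).
raw_response = '''
[
  {"id": "ytc_Ugyj2E-nvG0Ss_XIip14AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_Ugz4qLMkz2g0I4TSrnt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding record by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
# The record for the comment shown above matches the Coding Result table.
print(codings["ytc_Ugyj2E-nvG0Ss_XIip14AaABAg"]["reasoning"])  # prints "deontological"
```

Once indexed this way, each dimension in the Coding Result table (responsibility, reasoning, policy, emotion) is a direct key lookup on the matching record.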