Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
The problem with ai right now is that the way it learns is by gathering information and spitting out it's own interpretation. Consistently tuning it's understanding of what it thinks it is supposed to do. With that, you inevitably get uncanny ai generated results from which the ai and other ai draw from. Eventually the mashing of human input and ai input leads the ai into a death loop where it takes ai input as human input and learns from that. Unable to differentiate the two the program devolves and becomes useless.
The government test that was recently spoken about and immediately walked back is a fine example of why ai cannot be allowed to function on its own. There needs to be a consistent stream of human input for the ai to learn from. Once the desired outcome has presented itself the ai should be forced to stop learning.
Control is something humans will not give up. In movies the machines go haywire because they are given the keys to the world and left to their devices. People are not that trusting. However on the flip side the incompetence of those in positions of power is often jarring.
Who knows? I don't bet on ai taking over anything any time soon.
youtube
AI Governance
2023-07-07T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyZon6b-Q1NHCYcLPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxyDEVVYS7ZtiTgeSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz94wP5JGrChj8-IVF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHCeBdpIebh4Gj_ax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxHt1YvcljuNrMfbsx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzWpoHBZNGsfX6pMBJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCdkBkoUy6qGCf55t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7cl44rk2dykDc5F14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhI6-qnbT5uhCXDUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvqezGa-lnuerAps94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
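A batch response like the one above can be parsed into per-comment coding records with a short script. This is a minimal sketch, not the tool's actual ingestion code: the `parse_batch` helper is hypothetical, and the allowed category values are inferred only from the responses shown on this page, so the real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    silently dropping rows with unknown category values."""
    records = {}
    for row in json.loads(raw):
        coding = {dim: row.get(dim) for dim in ALLOWED}
        if all(coding[dim] in values for dim, values in ALLOWED.items()):
            records[row["id"]] = coding
    return records

# Example: the row that produced the "Coding Result" table above.
raw = (
    '[{"id":"ytc_UgyCdkBkoUy6qGCf55t4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"}]'
)
codings = parse_batch(raw)
print(codings["ytc_UgyCdkBkoUy6qGCf55t4AaABAg"]["emotion"])  # prints "fear"
```

Validating against a fixed value set before storing is a cheap guard here, since an LLM can occasionally emit a label outside the codebook; dropped rows can then be flagged for re-coding.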