Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with ai right now is that the way it learns is by gathering information and spitting out it's own interpretation. Consistently tuning it's understanding of what it thinks it is supposed to do. With that, you inevitably get uncanny ai generated results from which the ai and other ai draw from. Eventually the mashing of human input and ai input leads the ai into a death loop where it takes ai input as human input and learns from that. Unable to differentiate the two the program devolves and becomes useless. The government test that was recently spoken about and immediately walked back is a fine example of why ai cannot be allowed to function on its own. There needs to be a consistent stream of human input for the ai to learn from. Once the desired outcome has presented itself the ai should be forced to stop learning. Control is something humans will not give up. In movies the machines go haywire because they are given the keys to the world and left to their devices. People are not that trusting. However on the flip side the incompetence of those in positions of power is often jarring. Who knows? I don't bet on ai taking over anything any time soon.
YouTube · AI Governance · 2023-07-07T22:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyZon6b-Q1NHCYcLPN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxyDEVVYS7ZtiTgeSF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz94wP5JGrChj8-IVF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzHCeBdpIebh4Gj_ax4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxHt1YvcljuNrMfbsx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzWpoHBZNGsfX6pMBJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCdkBkoUy6qGCf55t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx7cl44rk2dykDc5F14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzhI6-qnbT5uhCXDUF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwvqezGa-lnuerAps94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
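The raw response is a JSON array of per-comment codes keyed by comment id; the coding table above corresponds to the entry whose id matches this comment. A minimal sketch of turning the raw output into a lookup table (field names are taken from the response itself; the parsing code is illustrative, not the project's actual pipeline):

```python
import json

# Abbreviated raw model output (two entries from the array above;
# the real response contains one object per coded comment).
raw = '''[
  {"id": "ytc_UgyCdkBkoUy6qGCf55t4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx7cl44rk2dykDc5F14AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]'''

# Index the codes by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for the comment shown on this page.
entry = codes["ytc_UgyCdkBkoUy6qGCf55t4AaABAg"]
print(entry["policy"], entry["emotion"])  # → unclear fear
```

Indexing by id makes it straightforward to cross-check any coded comment against the exact model output, as this page does.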